Program Monitoring & Evaluation: Leveraging Your Strengths and Smoothing Out the Hiccups

Overwhelmed? You don’t have to be!


You’ve figured out the Who, Why, When, Where, What, What For, and How of your program or workshop. You know what it means to be S.M.A.R.T. about your goals. You’ve tested out creative ways to get your objectives across. Finally, you’ve considered gathering feedback during your activity implementation. If you haven’t done any of this yet and would like to know more about how to do this, check out the four proven ways to increase the effectiveness of your program and workshops, and come back to this blog post.

If you have read it and/or have implemented some of the strategies I mentioned above, great! I hope that you found them useful, whether you are a seasoned nonprofit professional or someone who wants to provide meaningful programs and workshops for your community. Now it’s time to get to the second part of the equation. Let’s shift the focus to a separate but equally important issue: finding out if what you’re doing is actually effective.

So, how do you figure out if what you’re doing is effective? You monitor and evaluate. Monitoring and evaluation are the best tools in your arsenal for showing whether you’re moving in the right direction or have hit a snag somewhere. Here’s a breakdown of each one, how they work together, and five key things to keep in mind when monitoring and evaluating your program or workshop so that you can continue to leverage your strengths and smooth out your hiccups (because we don’t believe in weaknesses!).

Monitoring and Evaluation: What Are the Differences?

Before we get into the differences between monitoring and evaluation, here is why each element is important:

Evaluation helps us to see what went well and what needs to be smoothed out. First, to be as effective as possible and to promote lasting change, you need to know where your program or workshop stands right now. Sometimes after you conduct an evaluation, you’ll find that all you need is a little tweaking; other times you may have to do a complete overhaul. Second, evaluation helps you plan for the future when it comes to implementing more programs or workshops. Third, evaluation shows your program or workshop’s strengths as well as its hiccups, revealing where the gaps are and how to fill them in.

Monitoring allows you to see how the program or workshop is doing while you’re in the thick of it. You’re making observations, including how the participants are responding to the subject matter, how smoothly the activities flow, if the room is set up to allow movement and if you’ve made things accessible, and if your program is tailored to the appropriate audience, among other things.

The terms “monitoring” and “evaluation” are often used interchangeably, but they do have some key differences. Monitoring occurs during the implementation of your program or workshop’s activities. As mentioned above, you’re constantly collecting data on the overall design and flow and making early analyses. Evaluation, on the other hand, typically happens at the completion of the workshop or program, keeping in mind that the time frame of a workshop or program can vary. The primary focus of evaluation is not only to provide feedback on why an aspect of your workshop or program was successful (or not), but also to present recommendations for how to improve.

Putting it All Together

There are different types of monitoring and evaluation, with each one building on the last. Based on the Centers for Disease Control’s guidance on evaluating HIV prevention programs, these types make up the foundation for effective monitoring and evaluation. They include:

Formative evaluation (the base): Collects data on the needs of the target audience to determine how best to design your workshop or program to address those needs, and examines whether your workshop or program is appropriate for that audience.

Process monitoring: Collects data on the characteristics of the target audience, what services are provided to them, and what resources are available.

Process evaluation: Collects more detailed data on how the activities of the program or workshop were delivered, whether the implementation reached the intended audience, and what barriers were observed. (Did you reach the number of people you said you would?)

Outcome monitoring: Collects data about outcomes, including behavior change and learned skills. (Were the outcomes what you expected?)

Outcome evaluation: Similar to outcome monitoring, but compares the audience receiving the workshop or program to a similar group that has not participated. (Did both groups have similar outcomes? Is what you’re seeing a result of your activities, or of something else?)

Impact evaluation (the top): What was the broader impact of the activities in your workshop or program?

Five Key Things to Keep in Mind

Approach monitoring and evaluation from a positive standpoint: Some common misconceptions about monitoring and evaluation are that it’s too expensive, too academic, too time-consuming, too exclusive, and ultimately useless. This couldn’t be further from the truth. If you approach monitoring and evaluation from that viewpoint, it will forever feel daunting and you’ll never want to do it, which would be a disservice to you and the communities you work with. Monitoring and evaluation can be engaging, cost-effective, participatory, and very useful when you allow yourself to be open to the process. When you start from the mindset that it’s going to help you improve what you’re doing, you’ll see its usefulness.

Vary your tools: Some common ways to conduct monitoring and evaluation include focus groups, needs assessments, surveys, key informant interviews, logic models, public forums, data collection, and even direct observation. Each method has its pros and cons, and the right choice often depends on timing, funding, and even the culture of your target audience. (For example, if your program is for young Latina women ages 18-24, will the evaluator share their racial or cultural background? This can greatly affect the outcome.)

Ask for help: Many groups and organizations feel that they have to monitor and evaluate all on their own. This also isn’t true. I got my start in monitoring and evaluation after college, when I was a graduate research assistant at the Morehouse School of Medicine’s Prevention Research Center, and later as a graduate student at the Columbia University School of Social Work. In fact, one of the best ways to conduct evaluations is to solicit the assistance of college students, graduate students, or volunteers who are eager to gain experience. You can also contact your local college or university, as there are many professors who do this type of work. There are also consultants like me who are willing and able to assist.

Share your findings (and tailor them to your recipients): Sharing your findings is a great way to generate support and buy-in from your community. You and your organization aren’t the only stakeholders; others include funders, policy makers, community leaders, schools, and clients. There are a variety of ways to share what you’ve learned, including fact sheets, social media, public forums, TV, and evaluation reports. You have to tailor how you share your findings to your audience. For example, an evaluation report is probably best suited for funders or executive directors.

Embrace the hiccups: It’s not the end of the world if you discover that your workshop or program wasn’t as effective as you thought it would be. Rather than starting over from scratch, analyze the elements that drew the most hiccups based on feedback (oral feedback, post-session evaluations, etc.). Try moving that activity to another section of the program or workshop before you scrap it altogether. Really pay attention to the feedback you get from participants. If you’re delivering your program or workshop to the same type of participants and different groups make the same recommendation for improvement, that may be a key area to look into. You can always turn a hiccup into a strength.

Monitoring and evaluation can definitely feel overwhelming, but it isn’t impossible. It can make the difference between a successful program or workshop and one that falls flat. Next week, I’ll share an evaluation technique that many groups and organizations have found extremely useful, and even a lot of fun!

Ready to delve more deeply into strategically improving your programs and workshops? Contact me to learn more about my consulting services.

If you like this post, subscribe to the Raise Your Voice newsletter to receive resources, advice, and tips to help you raise your voice for women and girls of color.
November 21st, 2012 | Categories: Program Design & Evaluation