19 Apr, 2019

Try This: Put the Pieces Together

Get clear on your program’s purpose and everyone’s role.

During one of my first major evaluations, I asked the client if I could view the program’s curriculum. As I looked through it, I asked how often the curriculum was revised based on participant feedback.

The program had been around for some time, and while the client consistently evaluated the program and drew out positive feedback to share with funders, they hadn’t used that feedback to revise the actual program.

During another site visit for this program, I noticed that staff were familiar with certain aspects of the program, but not with the program in its entirety or with who was responsible for what. That’s understandable when you’re dealing with newer versus more seasoned staff. Some staff were also confused about the purpose of certain activities and when each activity was supposed to take place.

I’ve always been of the mindset that, in order to create or revise data collection and analysis tools and processes, you need a general sense of the program’s goals and how the program is supposed to function. This matters because staff will come and go. There should also be a built-in process for revising the program so that it meets the emerging needs of participants.

Like putting together a puzzle, looking at the multiple components of a program can be daunting. Instead of working on the entire puzzle at once, focus on one section at a time so that it all comes together.

This activity is ideal for:

  • Staff responsible for developing and overseeing the implementation of programs, services, and strategies

Here’s what you need:

  • Your program’s logic model
  • Your program curriculum
  • Bonus: Your most recent program data that gives you insight about the program (compiled and synthesized data)

The process:

Typically with my Try This exercises, I lay out all the steps. For this exercise, I’m going to leave that up to you.

To frame it, there are three parts to this process. First, review your program’s goals and objectives. Second, look at your staff roles to assess 1) whether everyone currently connected to the program is being utilized in ways that align with the program’s goals and objectives, 2) whether those involved are utilizing their expertise, and 3) who is responsible for what. Third, review the program’s activities to assess whether they currently align with the goals and objectives.

(Also, it should go without saying that “program” can also mean service, workshop, training, initiative, strategy, and so forth).

Here are some guiding questions (and feel free to add more):

(more…)

5 Oct, 2018

Ask Nicole: What Exactly Are You Evaluating?

Have a question you’d like to be featured? Let me know.

I recently had a video meeting with a staff member at one of my client organizations. We’re preparing for a presentation in a few weeks to orient some members of her staff to a newly developed evaluation working group. They will work directly with me on guiding the organization through the development of an evaluation framework for its programs and strategies, informed by the organization’s current strategic plan.

As we planned out the agenda and what topics to include, the staff member and I discussed various aspects of the evaluation design process, including logic models, theories of change, and data collection and dissemination. In that discussion, I touched on one aspect of the evaluation process that many would see as a given, but that’s actually more complex:

What exactly are you evaluating?

We ordinarily associate evaluation with the end of a program, when we collect data to find out whether the program actually achieved what it set out to do. But you can also evaluate a program as it’s being developed, or even evaluate whether the program is appropriate to implement in the first place.

While there are multiple evaluation theories, there are five common types of evaluation:

(more…)

8 Aug, 2018

Try This: Curb Your Evaluation Anxiety

With my program and evaluation clients, I offer the Evaluation Capacity Measure, an assessment created by Ann Price of Community Evaluation Solutions that I’ve revised a bit to include some open-ended questions. The assessment gauges an organization’s current capacity to evaluate its programs, strategies, and services, as well as its current support for evaluative thinking. Everyone from organizational leadership to board members is encouraged to take it.

This assessment helps me see what the current needs are, and it also gives clients a sense of what we can work on together in the evaluation capacity building process. I offer it to clients first as a baseline, then again towards the end of our partnership. Whether it’s a short-term or long-term project, clients like knowing their staff’s general views on evaluation, and how the process can be aligned with their respective roles and the organization’s mission and strategy. (At times, they notice that areas where they expected to score lower actually score higher, and vice versa.)

I recently administered the assessment for a new client whose staff works remotely and comprises several departments that work individually and collectively on a number of programs and strategies. Leading up to sending the assessment to her colleagues, the staff member leading the project with me shared her nervousness about what the results might reveal about her staff, but she was excited to see how the results could influence how they proceed with their evaluation priorities.

The goal of program evaluation is to 1) see if your program is performing the way it’s intended to and 2) facilitate organizational learning and improvement. In general, people tend to experience anxiety when they’re being evaluated. It’s like waiting in line with your classmates to see which kickball team captain would choose you for their team. You want the best players on your team, and the captains are sizing you up. Who is known to kick the farthest? Who can catch the ball mid-air? Who can run the fastest?

And when you’re picked last, it makes you feel a way. “Why wasn’t I one of the earlier picks?” you ask. More often than not, the team captains probably chose their teams based on who they were friends with, but a lot of emotional reactions can come up.

You get into the real world of adulting, and the feeling is still there. This time, it surfaces when you’re meeting with your supervisor for your mid-year or annual review. You think you’re performing well in most areas, until you discover (through someone else’s perspective) that you’re not.

In “Strategies for Managing Evaluation Anxiety: Toward a Psychology of Program Evaluation” (American Journal of Evaluation, Vol. 23, Issue 3, 2002), Stewart Donaldson, Laura E. Gooler, and Michael Scriven coined the term “excessive evaluation anxiety” (or XEA). Symptoms of XEA include:

  • Lack of access to important information and data
  • Compliance and cooperation problems
  • False reporting
  • Effects on bias and validity
  • Reduced utilization of evaluation findings

This can lead to stakeholders behaving in ways that can destroy the credibility of evaluation findings (and of evaluators). When a program evaluation’s findings show that the program isn’t performing the way it was intended, the blame game can begin. Instead of falling victim to XEA, how can you and your staff become more open to viewing the process as a learning experience?

To give you a taste of the Evaluation Capacity Measure I give to my clients (and to help jumpstart the conversation with your staff around evaluation anxiety), here are some questions you can walk your staff through.

Here’s what you need:

  • Flip chart paper, chalkboards, or walls
  • Note cards
  • Pens or pencils

Designate five open areas around the room. Each area will have one question at the top of the flip chart paper, chalkboard, or wall.

Give staff enough time to answer the questions thoughtfully. When time is up, have staff place their note cards under the designated question.

Next, divide up your staff so that each question has several sets of eyes on it. Have staff sort the note cards into themes, grouping similar responses together.

Last, have staff do a “walk-around”, where they can view the themes for each question. To close out the exercise, discuss the themes with staff and create an action plan to address the themes.

The steps:

Have your staff answer the following questions alone on note cards:

(more…)

6 Jul, 2016

How Can Nonprofits Balance Positive and Negative Feedback?

When you sit down with your supervisor for your annual review or evaluation, it can go one of two ways.

When your supervisor spends more time on areas that “need improvement”, you may walk out of her office feeling defeated.

And how do you feel when you receive a glowing review? Pretty happy. Makes you feel like you’re excelling.

If you care about what you do, you welcome praise as well as recommendations for how to improve. Too much positive feedback and you don’t feel the need to grow. If you’ve ever asked about things that you can improve on, and you don’t receive much of a response because everything is going well, how do you feel?

I’ve had my fair share of executive directors and program directors who only wanted me to focus on positive outcomes. And it makes me suspicious.

Of course, you want the people invested in what you’re doing to be happy. These people—the stakeholders—can be anyone who is impacted directly or indirectly by the programs, services, or initiatives you’ve created for them.

Stakeholders want to see what’s going well. What’s going well can mean more media coverage, more opportunities, and more funding. “Negative” findings (and I use quotation marks because negative is subjective) can also lead to more media coverage, and a lot of nonprofits fear this. Negative findings can give the impression that things are worse off than they really are.

But too much of the positive can give the impression that nothing needs to change. Let’s face it: Some nonprofits are out here designing surveys, in-depth interviews, and focus group questions that are so biased that one can’t expect anything but positive results. And that’s not valuable either.

How can positive findings give your staff the credit they deserve, and how can you address “negative” findings in a way that allows your stakeholders to see opportunities?

(more…)

6 Apr, 2016

When You’re Clear on What You Need, It’s Easier to Measure Your Impact

I talked with two potential clients this week, and both conversations turned out to be great discussions of how they plan to dive deeper into what makes their programming valuable to their audiences. There were lots of aha moments—on their end as well as mine—about how they conceptualize a potential evaluation project or training for their staff, the various evaluation theories they can draw inspiration from, and how prepared their staff is to embark on a small or large-scale evaluation project.

A few of those aha moments centered on my process for conducting an evaluation, and on how I assist clients in incorporating evaluative thinking into their work. Oftentimes, discussions of evaluation don’t come in until the very end of a project, so I encourage clients at the onset of a program to think more about what value their programming is expected to have for their audience.

While I typically have potential clients complete my client questionnaire prior to speaking with me, most of the time I’ll meet a potential client in person via a networking opportunity before setting up a time to discuss further.

During these recent calls, I found that we spent most of the time discussing how I go about conducting an evaluation or setting up a staff training on aspects of evaluation, and how those can complement their project. In those conversations, I touched on three key factors an organization needs to consider, each of which impacts how to measure the value of their program:

Clarity

A potential client questionnaire allows a client to conceptualize a potential evaluation project, and an in-person meeting or a phone call allows for deeper understanding and relationship building. Regardless of which precedes the other, clarity on what you want to do is important. One of the benefits of being an independent evaluator is that I’m able to provide objective feedback on a client’s project and outline the factors that may impact the evaluation project’s process. Another part of developing clarity is deciding whether you really need an external evaluator to take the lead on the project, or whether there’s another way to add more value to the process. Which leads into my second key factor. (more…)
