5 Oct, 2018

Ask Nicole: What Exactly Are You Evaluating?

Categories: Research & Evaluation

Have a question you’d like to be featured? Let me know.

I recently had a video meeting with a staff member at one of my client organizations. We're preparing a presentation, a few weeks out, to orient members of her staff to a newly developed evaluation working group. The group will work directly with me to guide the organization through developing an evaluation framework for its programs and strategies, anchored in the organization's current strategic plan.

As we planned the agenda and what topics to include, the staff member and I discussed various aspects of the evaluation design process, including logic models, theories of change, data collection, and dissemination. In that discussion I touched on one aspect of the evaluation process that many treat as a given, but that's actually more complex:

What exactly are you evaluating?

We ordinarily associate evaluation with the end of a program, where we collect data to find out whether the program achieved what it set out to do. But you can also evaluate a program while it's being developed, or even evaluate whether the program is appropriate to implement in the first place.

While there are multiple evaluation theories, there are five common types of evaluation:

(more…)

22 Aug, 2018

Tailor Your Strategy to Capture Your Stakeholders’ Attention

Categories: Research & Evaluation

Whether you're a nonprofit, community group, foundation, agency, or school (or somewhere in between), you have stakeholders: people who are affected, directly and indirectly, by the success and outcomes of your programs and strategies.

You should have a plan in place for how you engage your stakeholders with the information you want to share. Before you can create your plan, let’s identify your three stakeholder types.

Step 1: Identify your stakeholders

Each stakeholder has a particular set of needs and wants, along with varying levels of influence and interest. These can differ greatly across programs and strategies. Choose a program or strategy you're currently implementing, and identify all possible stakeholders for it. Next, break them down into these stakeholder categories:

Primary: A primary stakeholder is the group that most closely touches the program or strategy. For example, one of my past evaluation projects was for a local nonprofit. They wanted to conduct an internal evaluation to discover reasons for low volunteer engagement. Volunteers–both active and inactive–would be considered primary stakeholders.

Secondary: Secondary stakeholders are indirectly affected by the outcomes of a program or strategy. They serve as intermediaries. In our example above, the staff (both organizational staff and the clinic staff the volunteers worked for) can be secondary stakeholders.

Tertiary: Tertiary stakeholders are usually far removed from the impact of the program or strategy's outcomes, but they can serve in an advisory capacity. In our example, the board of directors would be considered a tertiary stakeholder.

Where your stakeholders fall depends on the program or strategy. In other words, a primary stakeholder for one program can be a tertiary stakeholder for another.

Step 2: Use the Five Ws (and the H)

Now, let's figure out how to engage your stakeholders based on their category. And what better way to determine how to engage them than using the Five Ws (and the H)?

(more…)

8 Aug, 2018

Try This: Curb Your Evaluation Anxiety

Categories: Research & Evaluation

With my program and evaluation clients, I offer the Evaluation Capacity Measure. It's an assessment created by Ann Price of Community Evaluation Solutions, and I've revised it a bit to include some open-ended questions. The assessment gauges an organization's current capacity to evaluate its programs, strategies, and services, along with its current support for evaluative thinking. Everyone from organizational leadership to board members is encouraged to take it.

This assessment helps me see what the current needs are, and it also gives clients a sense of what we can work on together in the evaluation capacity building process. I offer it to clients first as a baseline, then again toward the end of our partnership. Whether it's a short-term or long-term project, clients like knowing their staff's general views on evaluation, and how the process can be aligned with their respective roles and with the organizational mission and strategy. (At times, they may notice that areas they expected to score lower on come out higher than expected, and vice versa.)

I recently administered the assessment for a new client, whose staff works remotely and comprises several departments that work individually and collectively on a number of programs and strategies. Leading up to sending the assessment to her colleagues, the staff member leading the project with me shared her nervousness about what the results might reveal about her staff, but she was excited to see how the results could shape their evaluation priorities going forward.

The goal of program evaluation is twofold: 1) to see whether your program is performing the way it's intended to, and 2) to facilitate organizational learning and improvement. In general, people tend to experience anxiety when they're being evaluated. It's like waiting in a line with your classmates to see which kickball team captain would choose you for their team. You want the best players on your team, and the captains are sizing you up. Who is known to kick the farthest? Who can catch the ball mid-air? Who can run the fastest?

And when you're last to be picked, it makes you feel a way. "Why wasn't I one of the earlier picks?", you ask. More often than not, the team captains probably chose their teams based on who they're friends with, but plenty of emotional reactions can still come up.

You get into the real world of adulting, and the feeling is still there. This time, it surfaces when you're meeting with your supervisor for your mid-year or annual review. You think you're performing well in most areas, until you discover (through someone else's perspective) that you're not.

In "Strategies for Managing Evaluation Anxiety: Toward a Psychology of Program Evaluation" (American Journal of Evaluation, Vol. 23, Issue 3, 2002), Stewart Donaldson, Laura E. Gooler, and Michael Scriven coined the term "excessive evaluation anxiety" (or XEA). Symptoms of XEA include:

  • Lack of access to important information and data
  • Compliance and cooperation problems
  • False reporting
  • Effects on bias and validity
  • Reduced utilization of evaluation findings

This can lead to stakeholders behaving in ways that destroy the credibility of evaluation findings (and of evaluators). When a program's evaluation findings show that it's not performing as intended, the blame game can begin. Instead of falling victim to XEA, how can you and your staff become more open to viewing the process as a learning experience?

To give you a taste of the Evaluation Capacity Measure I give to my clients (and to help jumpstart the conversation with your staff around evaluation anxiety), here are some questions you can walk your staff through:

Here’s what you need:

  • Flip chart paper, chalkboards, or walls
  • Note cards
  • Pens or pencils 

Designate five open areas around the room. Each area will have one question at the top of the flip chart paper, chalkboard, or wall.

Give staff enough time to answer the questions thoughtfully. When time is up, have staff place their note cards under the designated question.

Next, divide up your staff so that each question has several eyes on it. Have staff place the note cards into themes, grouping similar responses together.

Last, have staff do a “walk-around”, where they can view the themes for each question. To close out the exercise, discuss the themes with staff and create an action plan to address the themes.

The steps:

Have your staff answer the following questions alone on note cards:

(more…)

22 Feb, 2018

How to Weave Storytelling with Statistics

Categories: Research & Evaluation

Dr. Jennifer Aaker, marketing expert and professor at Stanford Graduate School of Business, once shared a story of a marketing researcher who asked students to each give a persuasive one-minute pitch to their classmates. While most students included statistics in their pitches (an average of 2.5 stats), only one student included a story in their pitch. Afterwards, the researcher asked the students to write down every idea they remembered from each pitch. 

While only five percent of the students remembered a statistic, 63 percent remembered the story.

The reason? Aaker offers three:

  1. Stories are powerful tools that force people to slow down and listen.
  2. Stories influence how people see you.
  3. Stories move people from complacency to action. 

Statistics may bring attention to a cause, but stories elevate their impact. In short, stories can make your numbers more memorable and more credible.

Some people are hard numbers folks, and I get it. Especially if you're someone who's responsible for illustrating impact, such as a grant writer, funder, or nonprofit manager. Social workers and others in the helping professions rely on hard numbers because they can lead to increased funding for their programs and services.

It’s easier to pull numbers. Just create a survey and send it out. 

But if the marketing researcher’s discovery is any indication, stories draw people in and have greater impact. 

Take the "identifiable victim effect", for instance. This refers to the human tendency to offer greater sympathy and aid when a specific person is observed under hardship than when a vaguely defined group has the same need. The identifiable victim effect puts a "face" to a problem, creating greater impact.

For example, last night CNN aired a town hall featuring survivors of the mass shooting at Stoneman Douglas High School in Parkland, Florida, in which 17 people (including 14 students) were killed. Statistics from the Centers for Disease Control show that, on average, 96 people die by gun violence every day in the United States. A figure like that can wash over you as just a number, but hearing the voices of the survivors and family members during the town hall (and seeing the faces of the victims from the shooting) puts a face on the problem of gun violence in America like never before.

As a follow-up to my "Who Are The People Behind the Numbers?" blog post from 2014, I wrote a Try This exercise on using storytelling as a tool for Reproductive Justice. Sharing personal stories resonates with us and helps build powerful connections with others, while also building compassion, especially around experiences we've never had ourselves.

Sometimes seeing a high percentage raises awareness, but numbers alone may not capture the full picture. Some people even say that storytelling should replace numbers. Don't throw out your spreadsheets and statistical software just yet, but don't stress over telling the most compelling story without numbers to back it up, either. Instead, use storytelling to make your numbers stand out (and vice versa). Here are five ways to weave storytelling with statistics.

(more…)

14 Feb, 2018

Try This: Appreciative Inquiry

Categories: Research & Evaluation

Have you ever noticed that when you go looking for problems, more problems tend to appear?

It's like peeling back the layers of an onion while chopping it: it's never-ending, and your eyes water in the process.

The same goes for conducting community needs assessments. Because they're designed to identify the pressing needs of a community, they often focus on deficits, which doesn't do much for community morale. Continuous focus on the problem increases the likelihood of seeing the problem everywhere. This isn't to say that communities should turn a blind eye to what's happening, but it's worth raising awareness of this practice, as it can immobilize communities rather than mobilize them to create change.

A while back, I wrote a blog post on asset mapping as a tool for community organizing and engagement. One reason asset mapping and similar strengths-based tools are growing in popularity is a broader mindset shift away from solely deficit-based practices toward identifying community strengths. Whereas deficit-based practices are problem-focused and needs-driven and ask what's missing, strengths-based practices are opportunity-focused and strengths-driven and identify what is currently available to build upon.

Today, let's look at another strengths-based practice: appreciative inquiry.

What’s appreciative inquiry?

Appreciative inquiry (AI) is a strengths-based approach developed by Dr. David Cooperrider in the 1980s. First used in organizational development and change, AI has helped institutions worldwide bring the power of strengths-based approaches into multi-stakeholder innovation and collaborative design. It quickly gained ground in program evaluation following the 2006 release of Reframing Evaluation Through Appreciative Inquiry by Hallie Preskill and Tessie Catsambas.

AI focuses on identifying what is working well, analyzing why it is working well, and then doing more of it. In other words, AI teaches us that an organization will grow in whichever direction its people focus their attention.

If this can be done in organizations, why not apply it to community change?

Here’s what you need:

(more…)
