2 Nov, 2016

Ask Nicole: What’s the Best Way to Deliver Bad News?

If you have a question that you’d like to share with the Raise Your Voice community, contact me.

It’s the worst thing ever: that moment when you’ve been working with a client, community members, or some other stakeholder, and you have to deliver the bad news.

I recently got this email from a nonprofit professional (and FYI: I’ve removed identifying information):

My nonprofit has created a program that seeks to increase the importance of physical activity among indigenous youth in a rural community where there’s a lack of access to gyms and other places that would make it easier for youth to be more active. The stakeholders were expecting that the activities included in the program would resonate with the youth. In my nonprofit, I’ve been charged with carrying out an evaluation of this program. We used surveys and focus groups with the youth participants. The results of the evaluation were that the participants weren’t interested in the activities, which aligned with the lack of participation. In fact, the results showed that the participants had developed more creative ways to get in physical activity, but they brought up the need for other quality-of-life services that the program wasn’t addressing. The results could potentially impact the funding that was given to this program, as the funders were expecting that the program would be a success. What’s the best way to handle this?

Dealing with funders and leadership can be tricky, and nonprofits know all too well the stress of proving to stakeholders that a program or service is successful.

So, how do you share unexpected results in a way that is diplomatic and addresses concerns head-on?

Make it participatory from the start

I’ve worked with clients who expected that I would come in, ready to go, with all the surveys, focus group questions, and in-depth interviews scheduled. They just wanted someone to come in and do the work for them. When I noticed this happening, I began to push back against working with clients in this way and started encouraging current and potential clients to develop a participatory way of working together. From determining data collection tools to developing questions to ask participants (and even getting everyone together to interpret the data), making feedback gathering participatory from the start creates buy-in, puts everyone on the same page, and makes everything more transparent. When people are more involved, the process is more fun (at least for me), and everyone learns along the way.

And here’s a secret: When you make it participatory, it improves the likelihood that recommendations from the evaluation are actually implemented.

Address expectations and potential consequences 

When you ask your stakeholders what they intend the outcome of their program to be, also ask this:

“What if what we’re expecting doesn’t happen?”

Ideally, we create programs or services based on theory, research, and what’s happening in our community. That builds the foundation for meaningful work. Can you believe there are nonprofits that actually create programs or services just because they sound like a good idea? You’d be surprised. So can we really feel some type of way when we get results we weren’t expecting? In the case of the nonprofit above, it sounds like the program was created to address a need the community had already found its own way to meet.

But what about when we follow the theory, research, and community input, and the outcome is still not what we’re expecting?

Determine if it really is bad

(more…)

23 Mar, 2016

“But Does It Make A Difference?”

I was scrolling through my Twitter timeline a few nights ago, and came across a tweet from the American Evaluation Association’s Twitter account, highlighting a blog post from program evaluator and research designer Dr. Molly Engle of Evaluation is an Everyday Activity. Dr. Engle focused on how she starts and ends her day with gratitude, and how that gratitude extends to her work in program evaluation. What stood out the most was this quote:

Doing evaluation just for the sake of evaluating, because it would be nice to know, is not the answer. Yes, it may be nice to know; [but] does it make a difference? Does the program (policy, performance, product, project, etc.) make a difference in the lives of the participants[?]

As I’ve mentioned before, conducting an evaluation can lead to insights into how well a program is performing and what can be improved. How valuable is this program in the lives of the individuals, families, and communities you work with?

I’ve been thinking about this a lot, and about how it connects to the Reproductive Justice (RJ) movement and its application of the framework. I try to incorporate a gender-focused, intersectional analysis in everything I do. However, at some point (I can’t pinpoint exactly when), I started to burn out on the RJ movement.

I don’t see myself leaving the RJ movement anytime soon, so I began searching for another entry point into the movement outside of the traditional ways I’ve approached the work in the past. Program design and evaluation has been a way to reinvigorate my approach to RJ.

While it doesn’t sound as “sexy” or “trendy” as RJ, which has become more mainstream, evaluation incorporates my engagement skills as a social worker, and I’ve found a way in my business to assist organizations in thinking more critically about how they design programs and services as they relate to social justice work. While it may not be as exciting as a rally, I use my evaluation skills to gauge how an organization thinks of their program, what assistance may be needed to realize their vision, what their perceived “wins” (expected outcomes) are, and what the actual outcomes are.

Going back to Dr. Engle’s quote, it got me thinking: When an organization develops a program based on the RJ framework, what are the major similarities among RJ-based programs that receive funding from major donors or foundations? Do organizations evaluate RJ programs with the same criteria as programs based on a completely different framework? There are plenty of theories out there related to program design and evaluation, with lots of evaluation tools to choose from. Is there a separate set of evaluation tools we can use for RJ-based programs, and are we evaluating these programs based on what funders deem important, or rather on what makes sense to the organization applying the RJ framework? If the evaluation tools don’t exist, what could they potentially look like?

(more…)

12 Mar, 2015

Ask Nicole: What’s the Difference Between Research and Evaluation?

Do you have any questions related to social work, evaluation, or reproductive justice? Curious about how I feel about a particular topic? Contact me and I’ll answer it!

This is probably the most common question you’ll hear about evaluation practice. Because I’m asked this question often, I’d like to give my take on it.

To start, there are several differences between research and evaluation. Evaluation is a systematic way of figuring out how effective your programs and services are, and whether the desired outcomes of the program/service line up with what participants are experiencing. You can do this in a variety of ways, including surveys, focus groups, interviews, and more. Evaluation can inform key stakeholders (which can include legislators, program participants, funders, nonprofit staff, etc.) how sustainable your program or service is.

In comparison, research is designed to seek new knowledge about a behavior or phenomenon and focuses on the methods of getting to that new knowledge (hypotheses, independent/dependent variables, etc.). In other words, research wants to know whether a particular variable caused a particular effect (causation). Once testing is done, researchers can make recommendations and publish their findings. One of the key differences between research and evaluation, then, is that conducting an evaluation leads to insights into what’s going well and what can be improved. In other words, evaluation shows how valuable your program or service is.

(more…)

3 Dec, 2014

Who Are The People Behind The Numbers?

(Photo credit: Kaiser Family Foundation)

“Statistics are real people with the tears wiped away. When statistical data are presented, they seem sanitized and tend to distance the reader from the actual problem at hand.”  ~ Dr. B. Lee Green 

Let’s take a look at this graph, taken from the policy fact sheet “Sexual Health of Adolescents and Young Adults in the United States”, developed by the Kaiser Family Foundation.

This fact sheet provides key data on sexual activity, contraceptive use, pregnancy, prevalence of sexually transmitted infections (STIs), and access to reproductive health services among teenagers and young adults in the United States.

The chart above is taken from this fact sheet, and the data come from the 2013 Kaiser Women’s Health Survey. To list some statistics:

**70% of women ages 19 to 24 rated confidentiality about use of health care such as family planning or mental health services as “important”; however, the majority of girls and women were not aware that insurers may send an explanation of benefits (EOB) documenting the medical services used to the principal policy holder, who may be a parent.

**Today, 21 states and DC have policies that explicitly allow minors to consent to contraceptive services, 25 allow consent in certain circumstances, and 4 have no explicit policy.

**38 states require some level of parental involvement in a minor’s decision to have an abortion, up from 18 states in 1991: 21 states require that teens obtain parental consent for the procedure, 12 require parental notification, and 5 require both.

Of course, the correlation makes sense: the older a woman is, the more likely she is to be aware of what an EOB is and how health insurance companies may mail one to her home. In fact:

One of the earliest [Affordable Care Act] provisions that took effect in September 2010 was the extension of dependent coverage to young people up to age 26, who had the highest uninsured rate of any age group at the time the law was passed. In 2013, over four in ten (45%) women ages 18 to 25 reported that they were covered on a parent’s plan as a dependent. Because they are adult children, the extension of coverage has raised concerns about their ability to maintain privacy regarding the use of sensitive health services such as reproductive and sexual health care and mental health. (Kaiser Family Foundation, 2013)

I also find it interesting that the younger a woman is, the more likely she is to rate confidentiality as important when seeking various health care services. The fact that only 21 states and DC allow minors to fully consent to contraceptive services, and that most states require some level of parental involvement in a young person’s decision to have an abortion, is also worth looking into, especially in states that allow young people to access contraception without parental consent.

But we’re not here to talk solely about the statistics, and we’re not here to provide a full-on critique of the policy fact sheet.

(more…)

1 Oct, 2014

10 Common Mistakes that Keep Respondents from Completing Your Survey

Developing survey questions is harder than it looks. Asking questions is easy, but asking direct, unbiased, and valid questions is more of an art form. There’s a lot that goes into it, including flow, how the questions tie into what your program evaluation wants to answer, and keeping your respondents engaged enough to complete the survey.

Here are 10 common mistakes and my tips for avoiding them:

Not knowing your target audience: Understanding who your audience is can help you craft survey questions that are pertinent to them. Avoid using words or phrases that your respondents may not know the meaning of. Instead, use words and phrases that are tailored to your target audience. Are you surveying nurses, social workers, or teachers? It’s okay to use words or phrases that are most common to those target audiences. On the other hand, if you’re not sure whether your audience will understand what you mean by “reproductive justice,” it’s best to gather insights from the program coordinator or workshop facilitator to see if this term has been discussed.

Not explaining WHY: Believe it or not, most respondents are willing to help you if you share the value in completing your survey. When a respondent knows what’s in it for them, there’s a greater likelihood that the survey gets completed. If respondents know that their responses can aid in determining pay raises or in restructuring an under-performing program’s activities, they’re more likely to complete it. If an incentive (i.e., a gift card to the respondent’s favorite retail store, coffee shop, or wherever Visa is accepted) is included when a respondent completes your survey, indicate that at the very beginning, before respondents begin.

Including extensive demographic questions: When you ask too many demographic questions, they can take up a lot of room that could have been used for other questions. Before you add questions to gather information on a respondent’s income level, religion, socioeconomic status, etc., consider whether they’re appropriate and relevant to the overall survey and the basis of the evaluation. Also, unless the rest of your survey depends on these answers, consider leaving demographic questions for the end of the survey, as they tend to be the least interesting part of a survey for respondents to complete.

Asking too many questions: Tying into the previous point, asking too many questions can be the downfall of your survey. There are a variety of question types: open-ended; multiple choice; Likert or interval (very satisfied, satisfied, neutral, dissatisfied, very dissatisfied); ratio (“How many days do you spend studying?”); and dichotomous (true/false, yes/no, agree/disagree). But it’s more about the intent behind the question. My recommendation is to keep a survey to 15 questions at most. Keep in mind that engagement levels wane, especially during an online survey where there are more distractions (i.e., social media, videos, online shopping, etc.).

(more…)
