6 Jul, 2016

How Can Nonprofits Balance Positive and Negative Feedback?

Categories: Public Health & Social Work

 

When you sit down with your supervisor for your annual review or evaluation, it can go one of two ways.

When your supervisor spends more time on areas that “need improvement,” you may walk out of her office feeling defeated.

And how do you feel when you receive a glowing review? Pretty happy. Makes you feel like you’re excelling.

If you care about what you do, you welcome praise as well as recommendations for how to improve. Too much positive feedback and you don’t feel the need to grow. If you’ve ever asked about things that you can improve on, and you don’t receive much of a response because everything is going well, how do you feel?

I’ve had my fair share of working with executive directors and program directors who only wanted me to focus on positive outcomes. And it makes me suspicious.

Of course, you want the people invested in what you’re doing to be happy. These people—the stakeholders—can be anyone who is impacted directly or indirectly by the programs, services, or initiatives you’ve created for them.

Stakeholders want to see what’s going well. What’s going well can mean more media, more opportunities and more funding. “Negative” findings (and I use quotation marks because negative is subjective) can also lead to more media, and a lot of nonprofits fear this. Negative findings can give the impression that things are worse off than they really are.

But too much of the positive can give the impression that nothing needs to change. Let’s face it: Some nonprofits are out here designing surveys, in-depth interviews, and focus group questions that are so biased that one can’t expect anything but positive results. And that’s not valuable either.

How can positive findings give your staff the credit they deserve, and how can you address “negative” findings in a way that allows for your stakeholders to see opportunities?


6 Apr, 2016

When You’re Clear on What You Need, It’s Easier to Measure Your Impact

Categories: Public Health & Social Work

I talked with two potential clients this week, and both ended up being great conversations about how they plan to dive deeper into what makes their programming valuable to their audiences. There were lots of aha moments—on their end as well as mine—about how they conceptualize a potential evaluation project or training for their staff, the various evaluation theories they can draw inspiration from, and how prepared their staff is to embark on a small or large-scale evaluation project.

A few of those aha moments centered on my process for conducting an evaluation, and on how I assist clients in incorporating evaluative thinking into their work. Oftentimes, discussions on evaluation don’t come up until the very end of a project, so I encourage clients at the onset of a program to think more about what value their programming is expected to have for their audience.

While I typically have potential clients complete my client questionnaire prior to speaking with me, most of the time I’ll meet a potential client in person via a networking opportunity before setting up a time to discuss further.

During these recent calls, I found that we spent most of the time discussing how I go about conducting an evaluation or setting up a staff training on aspects of evaluation, and how those can complement their project. In those conversations, I touched on three key factors an organization needs to consider, each of which shapes how to measure the value of their program:

Clarity

A potential client questionnaire allows a client to conceptualize a potential evaluation project, and an in-person meeting or a phone call allows for deeper understanding and relationship building. Regardless of which precedes the other, clarity on what you want to do is important. One of the benefits of being an independent evaluator is that I’m able to provide objective feedback on a client’s project and outline the factors that may impact the process of the evaluation project. Another part of developing clarity is deciding whether you really need an external evaluator to take the lead on the project or whether there’s another way to add more value to the process. Which leads into my second key factor.

23 Mar, 2016

“But Does It Make A Difference?”

Categories: Research & Evaluation

I was scrolling through my Twitter timeline a few nights ago, and came across a tweet from the American Evaluation Association’s Twitter account, highlighting a blog post from program evaluator and research designer Dr. Molly Engle of Evaluation is an Everyday Activity. Dr. Engle focused on how she starts and ends her day with gratitude, and how that gratitude extends to her work in program evaluation. What stood out the most was this quote:

Doing evaluation just for the sake of evaluating, because it would be nice to know, is not the answer. Yes, it may be nice to know; [but] does it make a difference? Does the program (policy, performance, product, project, etc.) make a difference in the lives of the participants[?]

As I’ve mentioned before, conducting an evaluation can lead to insights into how well a program is performing and what can be improved. How valuable is this program in the lives of the individuals, families, and communities you work with?

I’ve been thinking about this a lot, and how it connects to the Reproductive Justice movement and its application of the framework. I try to incorporate a gender-focused, intersectional analysis in everything I do. However, I can’t pinpoint exactly when it began, but I started to burn out on the RJ movement.

I don’t see myself leaving the RJ movement anytime soon, so I began searching for another entry point into it, outside of the traditional ways I’ve approached the work in the past. Program design and evaluation has been a way to reinvigorate my approach to RJ.

While it doesn’t sound as “sexy” or “trendy” as RJ has become in the mainstream, evaluation incorporates my engagement skills as a social worker, and I’ve found a way in my business to assist organizations in thinking more critically about how they design programs and services as they relate to social justice work. While it may not be as exciting as a rally, I use my evaluation skills to gauge how an organization thinks of their program, what assistance may be needed to realize their vision, what their perceived “wins” (expected outcomes) are, and what the actual outcomes are.

Going back to Dr. Engle’s quote, it got me thinking: When an organization develops a program based on the RJ framework, what are the major similarities among RJ-based programs that receive funding from major donors or foundations? Do organizations evaluate RJ programs with the same criteria as programs based on a completely different framework? There are plenty of theories out there related to program design and evaluation, with lots of evaluation tools to choose from. Is there a separate set of evaluation tools that we can use to evaluate RJ-based programs, and are we evaluating these programs based on what funders deem important, or rather on what makes sense to the organization applying the RJ framework? If the evaluation tools don’t exist, what could they potentially look like?


3 Dec, 2014

Who Are The People Behind The Numbers?

Categories: Research & Evaluation

(Photo credit: Kaiser Family Foundation)

“Statistics are real people with the tears wiped away. When statistical data are presented, they seem sanitized and tend to distance the reader from the actual problem at hand.”  ~ Dr. B. Lee Green 

Let’s take a look at this graph, taken from the policy fact sheet “Sexual Health of Adolescents and Young Adults in the United States”, developed by the Kaiser Family Foundation.

This fact sheet provides key data on sexual activity, contraceptive use, pregnancy, prevalence of sexually transmitted infections (STIs), and access to reproductive health services among teenagers and young adults in the United States.

The chart above is taken from this fact sheet, and the data come from the 2013 Kaiser Women’s Health Survey. To list some statistics:

- 70% of women ages 19 to 24 rated confidentiality around the use of health care such as family planning or mental health services as “important”; however, the majority of girls and women were not aware that insurers may send an explanation of benefits (EOB) documenting the medical services used to the principal policy holder, who may be a parent.

- Today, 21 states and DC have policies that explicitly allow minors to consent to contraceptive services, 25 allow consent in certain circumstances, and 4 have no explicit policy.

- 38 states require some level of parental involvement in a minor’s decision to have an abortion, up from 18 states in 1991: 21 states require that teens obtain parental consent for the procedure, 12 require parental notification, and 5 require both.

Of course, the correlation makes sense: the older a woman is, the more likely she is to be aware of what an EOB is and that health insurance companies may send one by mail to her home. In fact:

One of the earliest [Affordable Care Act] provisions that took effect in September 2010 was the extension of dependent coverage to young people up to age 26, who had the highest uninsured rate of any age group at the time the law was passed. In 2013, over four in ten (45%) women ages 18 to 25 reported that they were covered on a parent’s plan as a dependent. Because these are adult children, the extension of coverage has raised concerns about their ability to maintain privacy regarding the use of sensitive health services such as reproductive and sexual health care and mental health. (Kaiser Family Foundation, 2013)

I also find it interesting that the younger a woman is, the more likely she is to rate confidentiality as important when seeking various health care services. The fact that only 21 states and DC allow minors full consent to access contraceptives, and that most states require some level of parental involvement in a young person’s decision to have an abortion, is also worth looking into, especially in states that allow young people to access contraception without parental consent.

But we’re not here to talk solely about the statistics. And we’re not here to provide a full-on critique of the policy fact sheet.


1 Oct, 2014

10 Common Mistakes that Keep Respondents from Completing Your Survey

Categories: Research & Evaluation

Developing survey questions is harder than it looks. Asking questions is easy, but asking direct, unbiased, and valid questions is more of an art form. There’s a lot that goes into it, including flow, how the questions tie into what your program evaluation wants to answer, and keeping your respondents engaged enough to complete the survey.

Here are 10 common mistakes and my tips for avoiding them:

Not knowing your target audience: Understanding who your audience is can help you craft survey questions that are pertinent to them. Avoid using words or phrases that your respondents may not know the meaning of. Instead, use words and phrases that are tailored to your target audience. Are you surveying nurses, social workers, or teachers? It’s ok to use words or phrases that are most common to those target audiences. On the other hand, if you’re not sure whether your audience will understand what you mean by “reproductive justice,” it’s best to gather insights from the program coordinator or workshop facilitator to see if this term has been discussed.

Not explaining WHY: Believe it or not, most respondents are willing to help you if you share the value of completing your survey. When a respondent knows what’s in it for them, the survey is more likely to get completed. If respondents know that their responses can aid in determining pay raises or in restructuring an under-performing program’s activities, they’re more likely to complete it. If an incentive (i.e., a gift card to the respondent’s favorite retail store, coffee shop, or wherever Visa is accepted) is included when a respondent completes your survey, indicate that at the very beginning of the survey, before respondents begin.

Including extensive demographic questions: When you ask too many demographic questions, they take up room that could have been used for other questions. Before you add questions to gather information on a respondent’s income level, religion, socioeconomic status, etc., consider whether it’s appropriate and relevant to the overall survey and the basis of the evaluation. Also, unless the rest of your survey depends on these answers, consider leaving demographic questions for the end of the survey, as they tend to be the more uninteresting parts of a survey for respondents to complete.

Asking too many questions: Tying into the second point, asking too many questions can be the downfall of your survey. There are a variety of question types—open-ended, multiple choice, Likert or interval (very satisfied, satisfied, neutral, dissatisfied, very dissatisfied), ratio (“How many days do you spend studying?”), and dichotomous (true/false, yes/no, agree/disagree)—but it’s more about the intent behind the question. My recommendation is to create a survey with no more than 15 questions. Keep in mind that engagement levels wane, especially during an online survey where there are more distractions (i.e., social media, videos, online shopping, etc.).
