8 Mar, 2017

Try This: Ask Better Questions

Categories: Program Design & Evaluation | 0 Comments

Starting today, we’re going to ask better questions: questions that allow you to dig deeper and unearth richer experiences. This is crucial to gaining a better understanding of why someone keeps (or stops) coming back to your programs, products, or services.

When I say “dig deeper”, what I’m getting at is being strategic in how we ask questions. There’s a difference between asking questions that allow you to truly hear what someone is saying, and asking questions because you’re searching for certain types of responses.

Digging deeper goes beyond “I love it!” or “I wouldn’t change a thing.” People come back to you for a reason, and those reasons can help you enhance what you’re offering and inspire creative, engaging solutions for needs you aren’t currently addressing.

Tips and examples 

Good questions:

  • Are unbiased
  • Are empowering
  • Provide a safe space for the person to feel comfortable responding
  • Stretch the person who is responding

I’ve highlighted the last point for a reason. Here’s an example:

Back in 2015, I facilitated a few focus groups for a client, a nonprofit that provides social justice-oriented feminist leadership for young women of color. The focus groups were for the organization’s 6-week summer leadership program in the New York City area. The organization wanted to know, among other things, how effective the program had been that summer.

Okay, sounds easy. I did a few site visits during the 5th week of the program to facilitate the focus groups. I had my questions ready based on the evaluation questions the organization sought to explore. During the first focus group, I asked, “Looking back on everything you’ve learned during the past 5 weeks, can you share something that you would change?” Many of the responses I got were along the lines of “I loved everything!” or “I wouldn’t change a thing” or “Everything was good.”

Initially, I chalked it up to the participants being teenagers. Then I realized they were responding this way because of HOW I asked the question.

So, I tried a different approach for the second and third focus groups:

Looking back on everything you’ve learned during this program, if you could rebuild this program from the ground up, based on your own needs and interests, what would it look like? 

(more…)

6 Apr, 2016

When You’re Clear on What You Need, It’s Easier to Measure Your Impact

Categories: Program Design & Evaluation | 2 Comments

I talked with two potential clients this week, and both ended up being great conversations about how they plan to dive deeper into what makes their programming valuable to their audiences. There were lots of aha moments—on their end as well as mine—about how they conceptualize a potential evaluation project or training for their staff, the various evaluation theories they can draw inspiration from, and how prepared their staff is to embark on a small or large-scale evaluation project.

A few of those aha moments centered on my process for conducting an evaluation and on how I assist clients in incorporating evaluative thinking into their work. Oftentimes, discussions about evaluation don’t come in until the very end of a project, so I encourage clients at the onset of a program to think more about what value their programming is expected to have for their audience.

I typically have potential clients complete my client questionnaire prior to speaking with me, though most of the time I’ll have met a potential client in person through a networking opportunity before setting up a time to discuss further.

During these recent calls, I found that we spent most of the time discussing how I go about conducting an evaluation or setting up a staff training on aspects of evaluation, and how those can complement their project. In those conversations, I touched on three key factors an organization needs to consider, each of which shapes how to measure the value of their program:

Clarity

A potential client questionnaire allows a client to conceptualize a potential evaluation project, and an in-person meeting or a phone call allows for deeper understanding and relationship building. Regardless of which precedes the other, clarity on what you want to do is important. One of the benefits of being an independent evaluator is that I’m able to provide objective feedback on a client’s project and outline the factors that may impact the process of the evaluation project. Another part of developing clarity is deciding if you really need an external evaluator to take the lead on this project or if there’s another way to add more value to this process. Which leads into my second key factor. (more…)

12 Mar, 2015

Ask Nicole: What’s the Difference Between Research and Evaluation?

Categories: Program Design & Evaluation | 0 Comments

Do you have any questions related to social work, evaluation, or reproductive justice? Curious about how I feel about a particular topic? Contact me and I’ll answer it!

This is probably the most common question you’ll hear about evaluation practice. Because I’m asked this question so often, I’d like to give my take on it.

To start, there are several differences between research and evaluation. Evaluation is a systematic way of figuring out how effective your programs and services are, and whether the desired outcomes of the program or service line up with what participants are experiencing. You can do this in a variety of ways, including surveys, focus groups, interviews, and more. Evaluation can inform key stakeholders (legislators, program participants, funders, nonprofit staff, etc.) about how sustainable your program or service is.

In comparison, research is designed to seek new knowledge about a behavior or phenomenon and focuses on the methods of getting to that new knowledge (hypothesis, independent/dependent variables, etc.). In other words, research wants to know if a particular variable caused a particular effect (causation). Once testing is done, researchers can make recommendations and publish their findings. However, one of the key differences between research and evaluation is that conducting an evaluation can lead to insights into what’s going well and what can be improved. In other words, evaluation shows how valuable your program or service is.

(more…)

1 Oct, 2014

10 Common Mistakes that Keep Respondents from Completing Your Survey

Categories: Program Design & Evaluation | 0 Comments

Developing survey questions is harder than it looks. Asking questions is easy, but asking direct, unbiased, and valid questions is more of an art form. There’s a lot that goes into it, including flow, how the questions tie into what your program evaluation wants to answer, and keeping your respondents engaged enough to complete the survey.

Here are 10 common mistakes and my tips for avoiding them:

Not knowing your target audience: Understanding who your audience is can help you craft survey questions that are pertinent to them. Avoid using words or phrases that your respondents may not know the meaning of. Instead, use words and phrases that are tailored to your target audience. Are you surveying nurses, social workers, or teachers? It’s OK to use words or phrases that are most common to those audiences. On the other hand, if you’re not sure your audience will understand what you mean by “reproductive justice,” it’s best to gather insights from the program coordinator or workshop facilitator to see if the term has been discussed.

Not explaining WHY: Believe it or not, most respondents are willing to help you if you share the value of completing your survey. When a respondent knows what’s in it for them, the survey is more likely to get completed. If respondents know that their responses can aid in determining pay raises or in restructuring an under-performing program’s activities, they’re more likely to complete it. If an incentive (e.g., a gift card to the respondent’s favorite retail store, coffee shop, or wherever Visa is accepted) is included when a respondent completes your survey, say so at the very beginning, before respondents begin.

Including extensive demographic questions: Too many demographic questions take up room that could have gone to other questions. Before you add questions to gather information on a respondent’s income level, religion, socioeconomic status, etc., consider whether it’s appropriate and relevant to the overall survey and the basis of the evaluation. Also, unless the rest of your survey depends on these answers, consider leaving demographic questions for the end of the survey, as they tend to be the least interesting part for respondents to complete.

Asking too many questions: Tying into the previous point, asking too many questions can be the downfall of your survey. There are a variety of question types—open-ended, multiple choice, Likert or interval (very satisfied, satisfied, neutral, dissatisfied, very dissatisfied), ratio (“How many days do you spend studying?”), and dichotomous (true/false, yes/no, agree/disagree)—but it’s more about the intent behind each question. My recommendation is to keep a survey to no more than 15 questions. Keep in mind that engagement levels wane, especially during an online survey, where there are more distractions (e.g., social media, videos, online shopping). (more…)

13 Aug, 2014

Ask Nicole: How Can I Build My Evaluation Skills?

Categories: Program Design & Evaluation | 0 Comments

Do you have a question that other Raise Your Voice community members can benefit from? Contact me and I’ll answer it!

Several weeks ago, I received the following email from a fellow program evaluator:

Hi Nicole,

I read your blog post, “Program Evaluation for Women and Girls of Color: How I Developed My Passion for Evaluation Practice,” and I was immediately drawn to it. I am an up-and-coming program evaluator who is fairly new to the field and still on a learning curve. I am struggling to figure out my place in the field, whether I belong here, and whether there are growth opportunities for me as an evaluator of color with a social equity, direct service, and light research background. A previous boss once told me that she didn’t believe I loved research and didn’t see me as being an evaluator. While I agree that research isn’t my forte, there continues to be something that draws me to evaluation. I consider myself to be pragmatic and can get lost in big-picture thinking, something researchers are good at. But I believe in program accountability, neutrality in the presentation of information, and integrity. These are all elements that I believe evaluation brings to the table. I do wish to grow in my career, but at times I feel like giving up because I don’t yet know a lot about many things related to evaluation. Anyway, I’m happy to have come across your blog post because it provided some comfort in knowing that I am not the only one who has questioned her place in program evaluation. Your words are empowering! It would be great to speak with you further about your career trajectory in evaluation. What professional development opportunities would you recommend? How may I build up my evaluation skills? Looking forward to your response.

This was a really thoughtful question, and it’s great to hear from a fellow program evaluator of color!

Program evaluation is a rapidly changing field, and as you can see, it’s exciting and daunting at the same time. Like you, I consider myself an up-and-coming evaluator, and I totally understand the feeling of not knowing all that one needs to know in order to get ahead in this field. I’ve come to find that, in my experience, you’ll always be on a learning curve because of emerging best practices, the latest research, and current trends. That’s what makes evaluation so exciting.

When I decided to develop a career in program evaluation, I began reading up on anything and everything related to it. And then I started to get overwhelmed. There’s so much to evaluation that it’s almost impossible to know everything. So, a recommendation I have for you is to figure out where you want to develop your niche and build your skills there, if possible. For example, I’m into participatory evaluation, empowerment evaluation, and evaluation theories that can be applied through racial, feminist, gender, and youth lenses. Elements such as logic models and quantitative and qualitative data collection are the basis for all evaluation theories, and when I need to figure out how to run an analysis, or if I need additional help looking for key themes in a qualitative data set, I’ll ask my colleagues. In other words, everything is, in the words of entrepreneur Marie Forleo, “figure-outable.”

While I think developing a niche is ideal, I understand that choosing an area of focus may be tricky and dependent on your actual job duties. Are you good at running data sets, spotting similarities, and comparing different kinds of variables? Do you like helping others run different data software, like SPSS, DataViz, and Excel? Do you like helping others present their data in a way that’s easy to understand and catered to the audience receiving the information? When I need to figure out a better or more interesting way to present my data, I like to turn to Stephanie Evergreen of Evergreen Data. In the blog portion of her website, she gives practical advice on how best to tailor your data presentation to your audience. Stephanie also runs Potent Presentations, which helps evaluators improve their presentation skills. When I need a better way to show my data in a bar chart or a graph, or even to participate in a DataViz challenge, I look to Ann Emery of Emery Evaluation. If I want to learn better ways to de-clutter my data, I like to read (and be entertained by) Chris Lysy of freshspectrum. Also, if I want to gain more insights on building an independent evaluation consulting business, I refer to Gail Barrington of Barrington Research Group.

When it comes to professional development and skills building, here are some places to get started:

(more…)