13 Aug, 2014

Ask Nicole: How Can I Build My Evaluation Skills?

Do you have a question that other Raise Your Voice community members can benefit from? Contact me and I’ll answer it!

Several weeks ago, I received the following email from a fellow program evaluator:

Hi Nicole,

I read your blog post, “Program Evaluation for Women and Girls of Color: How I Developed My Passion for Evaluation Practice,” and I was immediately drawn to it. I am an up and coming program evaluator who is fairly new to the field and still on a learning curve. I am struggling to figure out my place in the field, whether I belong here, and whether there are growth opportunities for me as an evaluator of color with a social equity, direct service, and light research background. A previous boss once told me that she didn’t believe I loved research, and didn’t see me as being an evaluator. While I agree that research isn’t my forte, there continues to be something that draws me to evaluation. I consider myself to be pragmatic and can get lost in big picture thinking, something researchers are good at. But, I believe in program accountability, neutrality in the presentation of information, and integrity. These are all elements that I believe evaluation brings to the table. I do wish to grow in my career, but at times I feel like giving up because I don’t yet know a lot about many things related to evaluation. Anyway, I’m happy to have come across your blog post because it provided some comfort in knowing that I am not the only one who has questioned her place in program evaluation. Your words are empowering! It would be great to speak with you further about your career trajectory in evaluation. What professional development opportunities would you recommend? How may I build up my evaluation skills? Looking forward to your response.

This was a really thoughtful question, and it’s great to hear from a fellow program evaluator of color!

Program evaluation is a rapidly changing field, and as you see, it’s exciting and daunting at the same time. Like you, I consider myself an up-and-coming evaluator, and I totally understand the feeling of not knowing all that one needs to know in order to get ahead in this field. I’ve come to find that, in my experience, you’ll always be on a learning curve because of emerging best practices, the latest research, and current trends. That’s what makes evaluation so exciting.

When I decided to develop a career in program evaluation, I began reading up on anything and everything related to program evaluation. And then I started to get overwhelmed. There’s so much to evaluation that it’s almost impossible to know everything. So, a recommendation I have for you is to figure out where you want to develop your niche, and build your skills in that area, if possible. For example, I’m into participatory evaluation, empowerment evaluation, and evaluation theories that can be applied within racial, feminist, gender, and youth lenses. Elements such as logic models, quantitative and qualitative data collection, and the like are the basis for all evaluation theories, and when I need to figure out how to run an analysis, or if I need additional help in looking for key themes in a qualitative data set, I’ll ask my colleagues. In other words, everything is (in the words of entrepreneur Marie Forleo) “figure-outable.”

While I think developing a niche is ideal, I understand that choosing an area of focus may be tricky and dependent on your actual job duties. Are you good at running data sets, spotting the similarities, and comparing different kinds of variables? Do you like to help others use different data software, like SPSS, DataViz, and Excel? Do you like helping others present their data in a way that’s easy to understand and catered to the audience receiving the information? When I need to figure out a better or more interesting way to present my data, I like to turn to Stephanie Evergreen of Evergreen Data. In the blog portion of her website, she gives practical advice for how best to tailor your data presentation to your audience. Stephanie also runs Potent Presentations, which helps evaluators improve their presentation skills. When I need to figure out a better way to show my data in a bar chart or a graph, or even participate in a DataViz challenge, I look to Ann Emery of Emery Evaluation. If I want to learn better ways to de-clutter my data, I like to read (and be entertained by) Chris Lysy of freshspectrum. Also, if I want to gain more insights on building an independent evaluation consultant business, I refer to Gail Barrington of Barrington Research Group.

When it comes to professional development and skills building, here are some places to get started:

(more…)

30 May, 2014

Program Evaluation for Women & Girls of Color: Do You Need Quantitative or Qualitative Data?

This is part four in a four-part series on program evaluation, dedicated to organizations and businesses that provide programs and services for women, girls, and communities of color (and for people with an interest in evaluation practice). Throughout this month, I will be discussing certain aspects of evaluation practice – from how I became interested in evaluation, to myths about evaluation, knowing what type of evaluation to perform, and bringing your community together to support evaluation – with the intent of highlighting the importance of evaluation not just from a funding perspective, but from an accountability and empowerment perspective.

In Part One, I shared how I got started in evaluation practice. In Part Two, I shared some of the common myths about evaluation that can cause us to look at evaluation negatively. In Part Three, I shared how asking the right questions is the key to successfully evaluating your program or service.

Now that you’ve determined why you want to evaluate your program or service, and you’ve decided if you should evaluate as the program or service is in the development stages or at its conclusion (or both!), it’s time to determine what type of data you want to collect.

The Basics

When it comes to research and evaluation, there are two types of data: quantitative data and qualitative data.

With quantitative data, you’re collecting information that can be analyzed numerically. You want to use quantitative data:

*To see what correlations exist between various factors

*To gather demographics (age, gender, race, grade level, etc.)

*To get the answers for “who”, “what”, and “how many” of an occurrence

*To draw a more generalized conclusion about a population

You can collect quantitative data through:

*Pre- and post-tests

*Surveys

*Questionnaires

*Brief telephone interviews or in-person interviews

In comparison, you collect qualitative data when you want a more narrative form of information, one that can’t be reduced to numbers. You want to use qualitative data:

*To get more in-depth explanations between correlated factors

*To gain insights into behaviors and experiences of a population

*To get the answers for “why” and “how” something is occurring

*To have a “voice” within a population rather than a generalization

You can collect qualitative data through:

*Observation

*Focus groups

*In-depth interviews (with stakeholders, program participants, or staff members for example)

*Case studies
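
Even narrative data eventually gets organized. Here is a minimal sketch in Python, with hypothetical focus-group excerpts that a human reader has already tagged with themes, showing how coded qualitative data can be summarized once the interpretive work is done:

```python
from collections import Counter

# Hypothetical coded excerpts from focus-group transcripts: each excerpt
# has been tagged (by a human reader, not by software) with one or more themes.
coded_excerpts = [
    {"text": "My friends and I made a pact to wait.",
     "themes": ["peer support"]},
    {"text": "I don't want to be the only one who hasn't.",
     "themes": ["peer pressure"]},
    {"text": "My best friend backs up my decision.",
     "themes": ["peer support"]},
    {"text": "People at school talk, so I worry what they'd say.",
     "themes": ["peer pressure", "reputation"]},
]

# Count how often each theme appears across the data set
theme_counts = Counter(t for e in coded_excerpts for t in e["themes"])

for theme, count in theme_counts.most_common():
    print(f"{theme}: {count}")
```

Note that the counting is the easy part: the qualitative work, reading each excerpt and deciding what theme it expresses, is where the “voice” of your participants comes through.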

Here Is an Example of Each

Let’s revisit our example from Part Three:

An evaluation of 11 high schools across the Lubbock, Texas school district looked at peer influence as a key component of delayed onset of sexual activity within the district’s mandated abstinence-based sex education curriculum. Let’s say that we expect the result of our program to be that students are heavily influenced by their peers in whether they are successful at delaying the onset of sexual activity (vaginal, anal, and oral sex).

An example of a quantitative question: Who are the students involved in the program? (Age, race, gender, sexual experience at time of program, etc.)

With this question, you’re collecting data on the participants’ demographics (the WHO) to determine who is participating in your program or service.

An example of a qualitative question: Do you feel that your peers play a role in whether or not you delay sexual activity?

With this question, you’re collecting more of a narrative as to why your participants feel the way that they do (the WHY). While this question can be answered as an open-ended question on a survey, by asking this question within a focus group or an in-depth interview setting, you’re able to get more detailed information.

Which is Best?

Comparing quantitative data and qualitative data is like comparing apples to oranges: both are useful forms of data, just as apples and oranges are both delicious types of fruit. How will you know which form of data collection to use? It depends on how you want to answer your evaluation question.

Deciding between quantitative and qualitative methods is largely based on your evaluation questions as well as on the practicality of collecting your data. A clearer way to decide between quantitative and qualitative data collection is to decide how specific or how general you want to be. (more…)

22 May, 2014

Program Evaluation for Women and Girls of Color: Develop The Evaluation Questions You Want To Answer

This is part three in a four-part series on program evaluation, dedicated to organizations and businesses that provide programs and services for women, girls, and communities of color (and for people with an interest in evaluation practice). Throughout this month, I will be discussing certain aspects of evaluation practice – from how I became interested in evaluation, to myths about evaluation, knowing what type of evaluation to perform, and bringing your community together to support evaluation – with the intent of highlighting the importance of evaluation not just from a funding perspective, but from an accountability and empowerment perspective.

So far, we’ve discussed some possible “WHYs” of evaluation practice (from the benefits of evaluating your programs and services, seeing if the objectives of your program or service are currently meeting the needs of your participants, to looking at the misconceptions of evaluation and how they can affect your work). Now, let’s switch gears and focus on WHAT you’re evaluating and WHEN to evaluate. This part of the series is trickier than the others, but I want to touch on the basics so that you have a working knowledge of this important part of evaluation. This is by no means a complete list. If you have a question about anything in particular (logic models, strategic plans, etc.) or would like me to give more examples of this week’s topic, please let me know in the comments below, and I can follow up with additional blog posts outside of this series.

What Are You Evaluating?

In order to get to your destination, you need to know where you’re going. To do this, you need to develop a strategy that will guide how you look at your data. This will help you determine if you’re producing the results you’re expecting. This is where evaluation questions come in. An evaluation question helps you look at your data to see if your program or service is producing its intended objectives.

There are two types of evaluation questions: process questions and results-focused questions. A process question asks how the program is functioning. How a program functions depends on a variety of factors, such as the length of the program, the number of participants, the activities being offered in the program, how the participants interpret and interact with the activities, and so forth. In other words, the who, what, when, and how of the program’s implementation. Process questions are especially useful when you’re in the beginning stages of planning your program; however, they can be asked throughout the program so that you’re always thinking ahead and adjusting your program’s implementation.

A results-focused question, on the other hand, asks whether the program is accomplishing the results you’re expecting. In other words, how effective is your program, and are your participants benefiting from the program in the way you’ve intended? Results-focused questions typically follow the completion of a program.

Now that we know more about the types of evaluation questions, let’s look at when each question comes into play. (more…)

14 May, 2014

Program Evaluation for Women & Girls of Color: 7 Reasons Why Evaluation is Intimidating

This is part two in a four-part series on program evaluation, dedicated to organizations and businesses that provide programs and services for women, girls, and communities of color (and for people with an interest in evaluation practice). Throughout this month, I will be discussing certain aspects of evaluation practice – from how I became interested in evaluation, to myths about evaluation, knowing what type of evaluation to perform, and bringing your community together to support evaluation – with the intent of highlighting the importance of evaluation not just from a funding perspective, but from an accountability and empowerment perspective.

You know that feeling you get when you’re sitting across from your supervisor during your annual job performance review? You think you’re doing a great job, you’re engaging with your co-workers, your projects are completed on time, and you manage your time well. Your supervisor agrees with you and talks glowingly about your performance, but then proceeds to give recommendations on “ways to improve”.

And now you’re uncomfortable. We all believe that we can handle constructive criticism, but who wants to hear how they’re not doing well? And we already know what improvements need to be made! They’re supposed to make us a better worker. Your supervisor gives you this list of things you need to improve on, and tells you that she would like to check in with you to see how you’re doing. You walk out of her office feeling frustrated. You see what isn’t going well, and are too self-conscious to ask how to improve. With “ways to improve” come concerns that if you don’t measure up, you’ll be reprimanded, you won’t get your raise, you’ll be demoted, or you’ll be let go.

Or…you welcome the challenge. You still feel a little uncomfortable, because it’s human nature to want others to see us at our best. But you already knew which areas you needed to work on but weren’t sure how to go about it, and you’re glad that your supervisor is providing you with concrete ways to do so. You ask your supervisor to provide you with more resources, trainings, literature, and other tools that can help you out as well. She even offers to provide you with additional support by checking in on a monthly basis to see where you are in your improvements. You begin to feel more confident, and your quality of work improves.

This is an example of how shifting your mindset can bring about a better outcome. And our mindset is what Part Two is about.

Last week in Part One, I shared what I believe are common concerns that go through the minds of nonprofit, agency, and business staff when it comes to evaluating a program or service.

… You’re tasked to carry out an evaluation and you don’t know where to begin. You lack the staff capacity needed to carry out an evaluation, or you want to build the capacity and are leery of hiring an external evaluator or don’t have the money in your budget to hire an internal evaluation staff member. When the evaluation is finally completed, you’re disappointed because the results you receive aren’t what you were expecting, and now you have to report it to your stakeholders and your funders. You’re trying to meet the expectations of the people you’re serving and also the expectations of your stakeholders and funders, and you feel that you’re at the mercy of an entity that can end your organization’s work, especially if a good portion of your funding comes from a primary source.

It’s a lot to think about, which can make it very easy to approach program evaluation with a “Why do we need to do this again?” mindset.

And just like how you feel as you sit across from your supervisor, how we look at program evaluation determines how successful we’re going to be at monitoring and evaluating our own programs and services, or at working with an external evaluator. (more…)

8 May, 2014

Program Evaluation for Women and Girls of Color: How I Developed My Passion for Evaluation Practice

This is part one in a four-part series on program evaluation, dedicated to organizations and businesses that provide programs and services for women, girls, and communities of color (and for people with an interest in evaluation practice). Throughout this month, I will be discussing certain aspects of evaluation practice – from how I became interested in evaluation, to myths about evaluation, knowing what type of evaluation to perform, and bringing your community together to support evaluation – with the intent of highlighting the importance of evaluation not just from a funding perspective, but from an accountability and empowerment perspective.

Outside of being a licensed social worker and an activist, what’s lesser known about me is that I’m a program evaluator. In fact, program design, implementation, monitoring, and evaluation are the focal point of my consulting business, and it’s what I studied primarily in my social work graduate program.

Admittedly, evaluation doesn’t sound as trendy as activism or even social work. It just sounds like a bunch of data collection and analysis, meetings with staff and stakeholders, presenting evaluation findings, and writing reports. Tedious and boring stuff that not many people pay attention to. (These are also key components of evaluation practice, and I’ll speak more about them in next week’s post.)

I’ve also noticed that when I talk about aspects of my work, I lightly touch on evaluation because most audiences I’ve spoken to have been more interested in the social work or activist side of me. So, to start off this series, I wanted to share with you how I got started in evaluation practice, what I enjoy most about it, how having a sound evaluation practice can lead to more funding and community support for your programs and services, and how it’s the glue that holds together my love for social work, activism, and working with women and girls of color. With this series, I’m giving program evaluation the spotlight it deserves. (more…)
