3 Dec, 2014

Who Are The People Behind The Numbers?

By |2021-08-19T18:41:54-04:00December 3rd, 2014|Categories: Research & Evaluation|Tags: , |0 Comments


(Photo credit: Kaiser Family Foundation)

“Statistics are real people with the tears wiped away. When statistical data are presented, they seem sanitized and tend to distance the reader from the actual problem at hand.”  ~ Dr. B. Lee Green 

Let’s take a look at this graph, taken from the policy fact sheet “Sexual Health of Adolescents and Young Adults in the United States”, developed by the Kaiser Family Foundation.

This fact sheet provides key data on sexual activity, contraceptive use, pregnancy, prevalence of sexually transmitted infections (STIs), and access to reproductive health services among teenagers and young adults in the United States.

The chart above is taken from this fact sheet, and the data come from the 2013 Kaiser Women’s Health Survey. To list some statistics:

*70% of women ages 19 to 24 rated confidentiality in the use of health care services such as family planning or mental health services as “important”; however, the majority of girls and women were not aware that insurers may send an explanation of benefits (EOB) documenting the medical services used to the principal policy holder, who may be a parent.

*Today, 21 states and DC have policies that explicitly allow minors to consent to contraceptive services, 25 allow consent in certain circumstances, and 4 have no explicit policy;

*38 states require some level of parental involvement in a minor’s decision to have an abortion, up from 18 states in 1991: 21 states require that teens obtain parental consent for the procedure, 12 require parental notification, and 5 require both.

Of course, the correlation makes sense: the older a woman is, the more likely she is to be aware of what an EOB is and that health insurance companies may send one by mail to her home. In fact:

One of the earliest [Affordable Care Act] provisions that took effect in September 2010 was the extension of dependent coverage to young people up to age 26, who had the highest uninsured rate of any age group at the time the law was passed. In 2013, over four in ten (45%) women ages 18 to 25 reported that they were covered on a parent’s plan as a dependent. Because they are adult children, the extension of coverage has raised concerns about their ability to maintain privacy regarding the use of sensitive health services such as reproductive and sexual health care and mental health. (Kaiser Family Foundation, 2013)

I also find it interesting that the younger a woman is, the more likely she is to rate confidentiality as important when seeking various health care services. The fact that only 21 states and DC allow minors complete consent to access contraceptives, and that most states require some level of parental involvement in a young person’s decision to have an abortion, is also worth looking into, especially in states that allow young people to access contraception without parental consent.

But we’re not here to talk solely about the statistics. And we’re not here to provide a full-on critique of the policy fact sheet.

(more…)

1 Oct, 2014

10 Common Mistakes that Keep Respondents from Completing Your Survey



Developing survey questions is harder than it looks. Asking questions is easy, but asking direct, unbiased, and valid questions is more of an art form. There’s a lot that goes into it, including flow, how the questions tie into what your program evaluation wants to answer, and keeping your respondents engaged enough to complete the survey.

Here are 10 common mistakes and my tips for avoiding them:

Not knowing your target audience: Understanding who your audience is can help you craft survey questions that are pertinent to them. Avoid using words or phrases that your respondents may not know the meaning of. Instead, use words and phrases that are tailored to your target audience. Are you surveying nurses, social workers, or teachers? It’s OK to use words or phrases that are most common to those target audiences. On the other hand, if you’re not sure if your audience will understand what you mean by “reproductive justice”, it’s best to gather insights from the program coordinator or workshop facilitator to see if this term has been discussed.

Not explaining WHY: Believe it or not, most respondents are willing to help you if you share the value in completing your survey. When a respondent knows what’s in it for them, there is a greater likelihood that the survey gets completed. If respondents know that their responses can aid in determining pay raises or in restructuring an under-performing program’s activities, they’re more likely to complete it. If an incentive (i.e., a gift card to the respondent’s favorite retail store, coffee shop, or wherever Visa is accepted) is offered for completing your survey, indicate that at the very beginning, before respondents begin.

Including extensive demographic questions: When you ask too many demographic questions, it can take up a lot of room that could have been used for other questions. Before you add questions to gather information on a respondent’s income level, religion, socioeconomic status, etc., consider whether it’s appropriate and relevant to the overall survey and the basis of the evaluation. Also, unless the rest of your survey depends on these answers, consider leaving demographic questions for the end of the survey, as they tend to be the less interesting part of a survey for respondents to complete.

Asking too many questions: Tying into the second point, asking too many questions can be the downfall of your survey. There are a variety of question types—open-ended, multiple choice, Likert or interval (very satisfied, satisfied, neutral, dissatisfied, very dissatisfied), ratio (“How many days do you spend studying?”), and dichotomous (true/false, yes/no, agree/disagree)—but it’s more about the intent behind each question. My recommendation is to keep your survey to 15 questions or fewer. Keep in mind that engagement levels wane, especially during an online survey, where there are more distractions (i.e., social media, videos, online shopping, etc.). (more…)
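As a rough illustration of how a closed-ended question type translates into data you can actually summarize, here is a minimal Python sketch that tallies made-up responses to a single Likert item. The item wording, the responses, and the helper name are all hypothetical, invented for this example:

```python
from collections import Counter

# Hypothetical responses to one Likert item:
# "The workshop met my expectations."
LIKERT_SCALE = ["Very dissatisfied", "Dissatisfied", "Neutral",
                "Satisfied", "Very satisfied"]

responses = ["Satisfied", "Very satisfied", "Neutral",
             "Satisfied", "Dissatisfied", "Satisfied"]

def summarize_likert(responses, scale):
    """Tally responses in scale order and report each option's share (%)."""
    counts = Counter(responses)
    total = len(responses)
    return {option: round(counts[option] / total * 100) for option in scale}

print(summarize_likert(responses, LIKERT_SCALE))
```

Because every option appears in the output (even at 0%), you can spot at a glance where engagement pooled, which is much harder to do with a stack of open-ended answers.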

13 Aug, 2014

Ask Nicole: How Can I Build My Evaluation Skills?



Do you have a question that other Raise Your Voice community members can benefit from? Contact me and I’ll answer it!

Several weeks ago, I received the following email from a fellow program evaluator:

Hi Nicole,

I read your blog post, “Program Evaluation for Women and Girls of Color: How I Developed My Passion for Evaluation Practice,” and I was immediately drawn to it. I am an up and coming program evaluator who is fairly new to the field and still on a learning curve. I am struggling to figure out my place in the field, whether I belong here, and whether there are growth opportunities for me as an evaluator of color with a social equity, direct service, and light research background. A previous boss once told me that she didn’t believe I loved research, and didn’t see me as being an evaluator. While I agree that research isn’t my forte, there continues to be something that draws me to evaluation. I consider myself to be pragmatic and can get lost in big picture thinking, something researchers are good at. But, I believe in program accountability, neutrality in the presentation of information, and integrity. These are all elements that I believe evaluation brings to the table. I do wish to grow in my career, but at times I feel like giving up because I don’t yet know a lot about many things related to evaluation. Anyway, I’m happy to have come across your blog post because it provided some comfort in knowing that I am not the only one who has questioned her place in program evaluation. Your words are empowering! It would be great to speak with you further about your career trajectory in evaluation. What professional development opportunities would you recommend? How may I build up my evaluation skills? Looking forward to your response.

This was a really thoughtful question, and it’s great to hear from a fellow program evaluator of color!

Program evaluation is a rapidly changing field, and as you see, it’s exciting and daunting at the same time. Like you, I consider myself an up and coming evaluator, and I totally understand the feeling of not knowing all that one needs to know in order to get ahead in this field. I’ve come to find that, in my experience, you’ll always be on a learning curve because of emerging best practices, the latest research, and current trends. That’s what makes evaluation so exciting.

When I decided to develop a career in program evaluation, I began reading up on anything and everything related to program evaluation. And then I started to get overwhelmed. There’s so much to evaluation that it’s almost impossible to know everything. So, a recommendation I have for you is to figure out what you want your niche to be, and build your skills in that area, if possible. For example, I’m into participatory evaluation, empowerment evaluation, and evaluation theories that can be applied within racial, feminist, gender, and youth lenses. Elements such as logic models and quantitative and qualitative data collection are the basis for all evaluation theories, and when I need to figure out how to run an analysis, or if I need additional help in looking for key themes in a qualitative data set, I’ll ask my colleagues. In other words, everything is (in the words of entrepreneur Marie Forleo) “figure-outable”.

While I think developing a niche is ideal, I understand that choosing an area of focus may be tricky and dependent on your actual job duties. Are you good at running data sets, spotting the similarities, and comparing different kinds of variables? Do you like to help others run different data software, like SPSS, DataViz, and Excel? Do you like helping others present their data in a way that’s easy to understand and catered to the audience receiving the information? When I need to figure out a better or more interesting way to present my data, I like to turn to Stephanie Evergreen of Evergreen Data. In the blog portion of her website, she gives practical advice on how best to tailor your data presentation to your audience. Stephanie also runs Potent Presentations, which helps evaluators improve their presentation skills. When I need to figure out a better way to show my data in a bar chart or a graph, or even participate in a DataViz challenge, I look to Ann Emery of Emery Evaluation. If I want to learn better ways to de-clutter my data, I like to read (and be entertained by) Chris Lysy of freshspectrum. Also, if I want to gain more insights on building an independent evaluation consultant business, I refer to Gail Barrington of Barrington Research Group.

When it comes to professional development and skills building, here are some places to get started:

(more…)

30 May, 2014

Program Evaluation for Women & Girls of Color: Do You Need Quantitative or Qualitative Data?



This is part four in a four-part series on program evaluation, dedicated to organizations and businesses that provide programs and services for women, girls, and communities of color (and for people with an interest in evaluation practice). Throughout this month, I will be discussing certain aspects of evaluation practice – from how I became interested in evaluation, myths about evaluation, knowing what type of evaluation to perform, and bringing your community together to support evaluation – with the intent of highlighting the importance of evaluation not just from a funding perspective, but from an accountability and empowerment perspective.

In Part One, I shared how I got started in evaluation practice. In Part Two, I shared some of the common myths about evaluation that can cause us to look at evaluation negatively. In Part Three, I shared how asking the right questions is the key to successfully evaluating your program or service.

Now that you’ve determined why you want to evaluate your program or service, and you’ve decided if you should evaluate as the program or service is in the development stages or at its conclusion (or both!), it’s time to determine what type of data you want to collect.

The Basics

When it comes to research and evaluation, there are two types of data: quantitative data and qualitative data.

With quantitative data, you’re collecting information that can be analyzed numerically. You want to use quantitative data:

*To see what correlations exist between various factors

*To gather demographics (age, gender, race, grade level, etc.)

*To get the answers for “who”, “what”, and “how many” of an occurrence

*To draw a more generalized conclusion about a population

You can collect quantitative data through:

*Pre- and post-tests

*Surveys

*Questionnaires

*Brief telephone interviews or in-person interviews
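To make the “correlations” and “how many” bullets above concrete, here is a minimal sketch with entirely made-up pre- and post-test scores for ten hypothetical participants. The numbers and the helper function are illustrative only, not drawn from any real survey:

```python
from math import sqrt
from statistics import mean

# Made-up pre- and post-test scores for ten hypothetical participants
pre_scores = [55, 60, 62, 48, 70, 66, 58, 64, 52, 61]
post_scores = [68, 72, 75, 59, 80, 79, 70, 77, 63, 74]

# "How many": the average change from pre-test to post-test
gains = [post - pre for pre, post in zip(pre_scores, post_scores)]
print(f"Average gain: {mean(gains):.1f} points")

def pearson(xs, ys):
    """Pearson correlation: how strongly two factors move together."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / sqrt(sum((x - mx) ** 2 for x in xs)
                      * sum((y - my) ** 2 for y in ys))

# "What correlations exist": do higher pre-test scores
# track higher post-test scores?
print(f"Pre/post correlation: {pearson(pre_scores, post_scores):.2f}")
```

This is exactly the kind of question quantitative data answers well: a single number summarizes the whole group, at the cost of the individual stories behind each score.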

In comparison, qualitative data is collected when you want to analyze a more narrative form of data that can’t be mathematically analyzed. You want to use qualitative data:

*To get more in-depth explanations between correlated factors

*To gain insights into behaviors and experiences of a population

*To get the answers for “why” and “how” something is occurring

*To have a “voice” within a population rather than a generalization

You can collect qualitative data through:

*Observation

*Focus groups

*In-depth interviews (with stakeholders, program participants, or staff members for example)

*Case studies
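As a rough sketch of what happens after qualitative collection, here is a minimal Python example that tallies evaluator-assigned themes across focus group excerpts. The speakers, themes, and data structure are all invented for illustration; in practice the coding itself is the interpretive, human part of the work:

```python
from collections import Counter

# Hypothetical coded excerpts from focus group notes: each excerpt has been
# tagged by the evaluator with one or more themes (the "why" behind responses)
coded_excerpts = [
    {"speaker": "Participant A", "themes": ["peer support", "trust"]},
    {"speaker": "Participant B", "themes": ["peer support"]},
    {"speaker": "Participant C", "themes": ["family expectations", "trust"]},
    {"speaker": "Participant D", "themes": ["peer support", "family expectations"]},
]

# Tally how often each theme surfaces across the conversation
theme_counts = Counter(theme for excerpt in coded_excerpts
                       for theme in excerpt["themes"])

for theme, count in theme_counts.most_common():
    print(f"{theme}: mentioned in {count} excerpt(s)")
```

Note that the counts here only organize the narrative; the quotes behind each theme, not the tally, carry the “voice” that makes qualitative data valuable.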

Here Is an Example of Each

Let’s revisit our example from Part Three:

An evaluation of 11 high schools across the Lubbock, Texas school district looked at peer influence as a key component of the district’s mandated abstinence-based sex education curriculum’s goal of delayed onset of sexual activity. Let’s say that we expect the result of our program to be that students are heavily influenced by their peers in whether they are successful at delaying onset of sexual activity (vaginal, anal, and oral sex).

An example of a quantitative question: Who are the students involved in the program? (Age, race, gender, sexual experience at time of program, etc.)

With this question, you’re collecting data on the participants’ demographics (the WHO) to determine who is participating in your program or service.

An example of a qualitative question: Do you feel that your peers play a role in whether or not you delay sexual activity?

With this question, you’re collecting more of a narrative as to why your participants feel the way that they do (the WHY). While this question can be answered as an open-ended question on a survey, by asking it within a focus group or an in-depth interview setting, you’re able to get more detailed information.

Which is Best?

Comparing quantitative data and qualitative data is like comparing apples to oranges. They are both useful forms of data, just as apples and oranges are both delicious types of fruit. How will you know which form of data collection to use? It depends on how you want to answer your evaluation question.

Deciding between quantitative and qualitative methods is largely based on your evaluation questions as well as on the practicality of collecting your data. A clearer way to decide between quantitative and qualitative data collection is to decide how specific or how general you want to be. (more…)

22 May, 2014

Program Evaluation for Women and Girls of Color: Develop The Evaluation Questions You Want To Answer



This is part three in a four-part series on program evaluation, dedicated to organizations and businesses that provide programs and services for women, girls, and communities of color (and for people with an interest in evaluation practice). Throughout this month, I will be discussing certain aspects of evaluation practice – from how I became interested in evaluation, myths about evaluation, knowing what type of evaluation to perform, and bringing your community together to support evaluation – with the intent of highlighting the importance of evaluation not just from a funding perspective, but from an accountability and empowerment perspective.

So far, we’ve discussed some possible “WHYs” of evaluation practice (from the benefits of evaluating your programs and services, to seeing if the objectives of your program or service are currently meeting the needs of your participants, to looking at the misconceptions of evaluation and how they can affect your work). Now, let’s switch gears and focus on WHAT you’re evaluating and WHEN to evaluate. This part of the series is trickier than the others, but I want to touch on the basics so that you have a working knowledge of this important part of evaluation. This is by no means a complete list. If you have a question about anything in particular (logic models, strategic plans, etc.) or would like me to give more examples of this week’s topic, please let me know in the comments below and I can follow up with additional blog posts outside of this series.

What Are You Evaluating?

In order to get to your destination, you need to know where you’re going. To do this, you need to develop a strategy that will guide how you look at your data. This will help you determine if you’re producing the results you’re expecting. This is where evaluation questions come in. An evaluation question helps you look at your data to see if your program or service is meeting its intended objectives.

There are two types of evaluation questions: process questions and results-focused questions. A process question asks how the program is functioning. How a program functions depends on a variety of factors, such as the length of the program, the number of participants, the activities being offered in the program, how the participants interpret and interact with the activities, and so forth. In other words, the who, what, when, and how of the program’s implementation. Process questions are especially useful when you’re in the beginning stages of planning your program; however, they can be asked throughout the program so that you’re always thinking ahead and adjusting your program’s implementation.

A results-focused question, on the other hand, asks whether the program is accomplishing the results you’re expecting. In other words, how effective is your program, and are your participants benefiting from the program in the way you’ve intended? Results-focused questions typically follow the completion of a program.

Now that we know more about the types of evaluation questions, let’s look at when each question comes into play. (more…)
