Is your service working to support people to make changes in their beliefs, attitudes or behaviour? Do you want to measure these changes? It sounds tricky, but you can, and one of the most reliable ways to do it is by using surveys with rating scales.
Everyone is familiar with rating scales; lots of websites and marketing researchers use them. There are a few different kinds. The simplest look like this.
How much do you enjoy reading How Tos?
Not at all
A great deal
There is a science to these scales (sometimes called Likert scales), and there are steps you need to work through to create one. Here's a guide.
Decide what question you want to ask
This sounds obvious but it’s a really important step. What is it that you want to evaluate or measure? How are you going to ask the question?
Create your scale
Once you know what you want to ask you are ready to create your scale. There are lots of examples on the internet but here are some guidelines on how to judge whether the one you are copying or creating is a good one.
Keep the scale to 5 or 7 options. You will see scales that go up to 10 but these can overwhelm people with options.
Make sure your scale is balanced. Your scale needs to have an equal number of options on either side of a neutral option (a neutral option is something like ‘not sure’, ‘maybe’, 'no opinion').
Label your options. Numbers look nice and clear but some people struggle with giving them meaning. People also give very different meanings to a number (is 5 out of 10 a glass half empty or a glass half full?) and researchers say that when presented with numbers people will avoid choosing the extreme ends. It’s best to give each of your options a word label (eg, bad, ok, good). This helps people to choose what best reflects their opinion. They will also know exactly what meaning you are attaching to their answer.
Try to make your scale about one idea. Where possible make your scale about one idea so that it moves through a range of options about that idea (e.g. extremely something → not at all something). Scales with two ideas ask if something is extremely one thing or extremely another. People can find this confusing because they might give different meanings to the two ideas (for example, a scale asking about the strength of something and using two ideas might go from very robust to very insipid. Robust means strong and insipid means not strong, but put together these words can be confusing.)
Make your scale evenly spaced. The response options should seem like even steps, one to the next. For example a scale which goes
Excellent, really good, good, poor
might confuse people because excellent, really good and good are all very positive but poor is very far from positive.
A better scale would be
Excellent, good, fair, poor, very poor
Test your question and your scale
Next, make sure you test out the question and the scale before you start using it more widely. What sounds clear to you could be confusing to other people. Use the feedback you get to revise the scale.
Collect and analyse the information
You can use rating scales in surveys. Let's imagine you've got your clients or sector colleagues to answer your question using your rating scale. Now you can analyse that information. Here's how you do it.
You turn your answers into numbers. Work out the number of people responding to each option and what percentage that is of the responses.
But often you will find that people skip your question. What do you do? Well, imagine you have asked 14 people attending your support group if they felt the program had helped them increase their confidence. They were given the scale 'a lot, a little, neutral, not much, not at all'. Six said 'a lot', six said 'a little' and two didn't answer the question. You have twelve 'valid responses' out of your fourteen. If this happens, there are two ways you can analyse your information.
- You can ignore everyone who didn’t answer. This means that you have 12 ‘valid responses’ to analyse. Six people chose ‘a lot’ (50%) and six chose ‘a little’ (50%).
- You can count the people who didn't respond (this is less common). If you do that, you have 14 answers to analyse. Six out of fourteen people chose 'a lot' (43%), six chose 'a little' (43%) and two didn't respond (14%).
Just make sure you explain in your report how you worked out your figures.
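The two approaches above can be sketched in a few lines of code (a minimal illustration using the example figures; the response list is made up to match the scenario of 14 people with 2 skipped answers):

```python
# Example responses: 6 chose 'a lot', 6 chose 'a little', 2 skipped (None)
responses = ['a lot'] * 6 + ['a little'] * 6 + [None] * 2

# Keep only the people who actually answered
valid = [r for r in responses if r is not None]

# Option 1: ignore everyone who didn't answer (12 valid responses)
for option in ('a lot', 'a little'):
    pct = 100 * valid.count(option) / len(valid)
    print(f"{option}: {pct:.0f}% of valid responses")   # 50% each

# Option 2: count the non-responders (14 answers in total)
for option in ('a lot', 'a little'):
    pct = 100 * responses.count(option) / len(responses)
    print(f"{option}: {pct:.0f}% of all responses")     # 43% each
skipped = 100 * responses.count(None) / len(responses)
print(f"didn't respond: {skipped:.0f}%")                # 14%
```

Whichever option you pick, the denominator (12 or 14) is the figure to explain in your report.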
You can also work out the average response.
You can do this by giving each of your answers a ‘weight’, or a numerical value. For example:
a lot = 2
a little = 1
neutral = 0
not much = -1
not at all = -2
If six out of 12 people chose 'a lot' (6 x 2) and six out of 12 people chose 'a little' (6 x 1), you add those totals and then divide by 12 (the number of valid respondents). The average response is 1.5 on a scale of plus 2 to minus 2.
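As a sketch, the weighted average works out like this (the weights follow the plus 2 to minus 2 example above):

```python
# Weight (numerical value) for each option on a +2 to -2 scale
weights = {'a lot': 2, 'a little': 1, 'neutral': 0,
           'not much': -1, 'not at all': -2}

# The twelve valid responses from the example: six 'a lot', six 'a little'
answers = ['a lot'] * 6 + ['a little'] * 6

# Add up the weighted totals, then divide by the number of respondents
average = sum(weights[a] for a in answers) / len(answers)
print(average)  # (6*2 + 6*1) / 12 = 1.5
```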
Share your information
The information you have gathered is an important story to tell clients, colleagues and funders. You need to tell it in a way that makes sense and helps inform their decision making. How do you do that? See our guide, 'How to present quantitative information in your reports'.