Top Three Criteria for Good Indicators

If you have not done so already, please have a look at the Knowledge section 'Indicators: Basic Concepts' before you read further, as this article builds on the information given there.

You will find many criteria for good indicators on the internet, some more comprehensible than others. We have studied them and narrowed them down to the following three key criteria for good indicators. After reading this article you can test the strength of your own indicators with the 'Test the strength of your indicator' tool, which you can access in the toolbox section for this step.


Valid

Meaning: An indicator is valid when it accurately indicates the phenomenon you are trying to measure. It should be sensitive enough to pick up changes in that phenomenon over time.

Example: Life satisfaction has been found to be a valid indicator of happiness. Although high self-esteem is often associated with happy people, high self-esteem in itself is not a valid indicator of happiness, because people who esteem themselves positively can still experience periods of unhappiness.

Reliable

Meaning: An indicator is reliable when it accurately indicates the phenomenon even when measured by different people, in different places, over time. It should be written clearly, using language that will be interpreted in the same way by anybody collecting information in any circumstance. The indicator is more likely to be reliable if it is very specific about what is to be measured.

Example: An organisation was collecting information about schools participating in its face-to-face interaction programme. Its indicator was 'number of schools registered'. (This could have been more specific, such as 'number of schools registered for x'.) Over time, besides the regular face-to-face programme, the organisation also started to offer some events-based activities in which schools could participate. Some staff, who wanted to improve their performance against targets, applied a broad interpretation of the indicator and registered schools for participation in these ad hoc events. After a while, it was discovered that the indicator was no longer a reliable tool for assessing school participation in the organisation's regular face-to-face programme. The indicator was re-written, but as a result an unknown portion of the school participation data became invalid because it could no longer be used for comparison.

Simple and Affordable

Meaning: Every indicator involves data collection that will draw on your staff or programme participants. Data collection for the indicator should not be too burdensome on people, and the amount of resources (funds, personnel, time) required should be reasonable. The more burdensome or complicated data collection is, the more likely it is that the quality of the data collected will be sub-standard. You need to be realistic about the time involved in collecting and submitting the data for the indicator. Incomplete data that is not submitted in time will affect your own and others' understanding of your progress. Be aware that providing incentives to motivate people to collect data might lead to biased or false reporting.

Example: Community clinics under the Department of Health (DoH) collect service utilisation data for certain predetermined age categories (such as 0-1 years; 2-6 years). A programme focused on the reproductive health of youth in a particular age range, which happened to fall across two of the DoH categories, wanted the clinic staff to recalculate the utilisation statistics each month in order to provide information for its indicator, 'utilisation of reproductive health services', for the age group relevant to the programme. This meant the clinic staff had to adjust their basic data collection technique and keep an additional template to capture and calculate this data. The increased workload became a burden. The process was not institutionalised, so clinic staff would often forget to do it or do it incorrectly, particularly when staff rotated. In the end, the data collection required to measure this indicator was not realistic and produced disappointing results. Additionally, the clinic staff, and eventually the DoH, found the programme irritating.