Case Study: Indicators, Targets and Measures for New Beginnings Development Centre

Selecting indicators, setting targets and working out how to measure and collect the required information is the essence of your evaluation plan. It takes a lot of careful thought, but we recommend investing this time. Before reading further, it will be helpful to have a quick look at this example.

We started by identifying the easiest indicators (the number of registrations and graduates). When we got stuck on the more difficult ones, like measuring improvement in quality of life, we moved on to identify targets and means of verification for the easiest indicators. In this way we felt we were continually making progress, and we chose to tackle the more difficult ones later. By then we were very motivated, because it was clear that the rest of the plan had come together nicely.

Figuring out how to measure difficult indicators:

Being positive and motivated helped, but it did not make the process easier; we first needed to understand what quality of life means and which aspects of it the New Beginnings course could realistically affect. For that we searched the internet, reviewed some of the existing quality of life indicators, and chose from among them. We decided on a pre- and post-intervention questionnaire to check for self-reported changes in students’ quality of life. Because it is an entry requirement for the programme that students have no source of income, we felt that if they then:

  • Get employed or create their own employment (especially in the areas in which they received technical training) after they graduate, with a subsequent increase in income level;
  • And if the majority of them manage to retain this employment/income for a reasonable period of time;

this evidence would be sufficient to conclude that the New Beginnings programme is effective. We were therefore not sure that quasi-experimental research (comparing groups that participated with those that did not) would be necessary, but at this stage we decided to consult a seasoned evaluator.

She confirmed that a control group comparison would probably not be necessary, but she also warned us about the pre-post intervention questionnaire: quality of life is a subjective concept, and people’s perceptions of it depend on the standards of living that they are used to. When our quality of life improves, our standards also rise, since we are always aspiring to have or be more. A student might rate himself a 5 on a quality of life scale ranging from 1 (bad) to 10 (good) in the pre-test. After some time, and improvements in his quality of life as a result of the programme, his standards have also risen; judged against these higher standards (and not against what he said in the pre-test questionnaire), he rates himself a 5 again.

To prevent this bias from influencing the outcome of our evaluation, she recommended that we instead administer a single questionnaire after participation in the programme, in which we ask graduates to rate their lives both before and after the programme retrospectively, to ensure a fair comparison. A version of the questionnaire (that is not retrospective) can still be included as part of registration for the programme, and once New Beginnings have a reasonably large sample[1], they can compare the ratings of newly registered students who have not yet participated in the programme (as a control group) with the retrospective ratings of programme alumni.
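As a minimal sketch of this comparison (the ratings and sample sizes below are purely illustrative, not New Beginnings data), the analysis could start by checking whether the alumni's retrospective "before" ratings line up with the control group of newly registered students, and then estimating the self-reported change:

```python
from statistics import mean

# Hypothetical 1-10 quality-of-life ratings (illustrative values only).
new_registrants = [4, 5, 3, 6, 4, 5, 4]      # not yet in the programme (control group)
alumni_retro_before = [3, 4, 4, 5, 3, 4, 5]  # alumni rating their pre-programme life retrospectively
alumni_retro_after = [7, 8, 6, 7, 8, 7, 6]   # alumni rating their current life

# If the retrospective "before" ratings are close to the control group's
# ratings, the retrospective design is plausibly giving a fair baseline.
baseline_gap = mean(alumni_retro_before) - mean(new_registrants)

# The retrospective pre-post difference estimates the self-reported change
# while avoiding the shifting-standards bias described above.
reported_change = mean(alumni_retro_after) - mean(alumni_retro_before)

print(f"baseline gap vs control: {baseline_gap:.2f}")
print(f"self-reported change:    {reported_change:.2f}")
```

A small baseline gap suggests the retrospective ratings are a reasonable stand-in for a true pre-test; a formal analysis (significance testing, sampling) would still need the evaluator's input.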

What about evaluating the programme theory?

The above analysis of the improvement in quality of life will be very interesting, but the evaluation might never get to that point. 

If New Beginnings notice (through their monitoring system) that their students are consistently not getting employed (or are losing their employment) despite graduating from the programme, they will need to investigate. This will entail the formulation of new evaluation questions, which might include the following:

  • Is their training in line with industry requirements, and is the standard of the training sufficiently high to allow their students to compete in the job market?
  • Are the students that they are attracting capable of successfully participating in the job market in the skill areas being developed? (i.e. are their assumptions about the motivation and enabling environment of students holding?)
  • One of the assumptions of the programme that is not within its control is that employment opportunities will be available. To what extent is this assumption not holding? In other words, is it reasonable for their students to be experiencing these difficulties, given the broader economic situation?
  • Is the job placement service that they are offering students functioning well? 

The usefulness of having done the M&E plan

It has been a long process (since Step 1_Plan) to get to this point, where we know where we are going in terms of the monitoring and evaluation of the programme. Although this plan might be adjusted as we implement the programme, what is especially useful is that we now know that, in order to effectively monitor and evaluate their programme, New Beginnings have to:

  • Implement a student administration system which is able to accurately report on the registration, retention and ultimate graduation of students;
  • Administer a written and practical exam to test the knowledge and skills of their students as part of the graduation process;
  • Implement an alumni programme which features an information collection system. Through this system New Beginnings should be able to keep track of their alumni’s status on key indicators (have they found employment; are they still employed or not after certain periods of time etc.);
  • Collect information on key indicators (such as income level and ratings on certain life quality indicators) before the students start participating in the programme (i.e. upon registration with the programme);
  • Collect information on key indicators after the students have graduated from the programme and have had a source of income for some time (which would again necessitate the development of an alumni programme).
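The record keeping that the list above implies could be sketched as follows (a hypothetical structure; the field names and the retention calculation are assumptions for illustration, not New Beginnings' actual system):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class StudentRecord:
    """One row in the student administration system."""
    student_id: str
    registered: bool = True
    graduated: bool = False
    exam_passed: Optional[bool] = None           # written + practical exam result
    baseline_income: float = 0.0                 # entry requirement: no income
    baseline_life_quality: Optional[int] = None  # 1-10 rating at registration

@dataclass
class AlumniRecord:
    """One row in the alumni programme's tracking system."""
    student_id: str
    employed: bool = False
    months_employed: int = 0
    current_income: float = 0.0
    retro_before_rating: Optional[int] = None    # retrospective pre-programme rating
    retro_after_rating: Optional[int] = None     # rating of current life

def retention_rate(students: list) -> float:
    """Share of registered students who ultimately graduated."""
    registered = [s for s in students if s.registered]
    if not registered:
        return 0.0
    return sum(s.graduated for s in registered) / len(registered)

students = [
    StudentRecord("s1", graduated=True),
    StudentRecord("s2", graduated=True),
    StudentRecord("s3"),
]
print(retention_rate(students))  # 2 of 3 registered students graduated
```

Even a spreadsheet with these columns would serve; the point is that registration, graduation, exam results and alumni status need to be captured consistently for the indicators to be reportable.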

All of the above should be quite doable for the programme staff themselves, although they might need some assistance with the analysis of the before-and-after data, especially if they decide to do the control group comparison. They could contact the Sociology Departments of the University of Cape Town or the University of the Western Cape for assistance with this analysis.

[1] Read more about sampling in the Evaluation section of the 'Growing Confidence' website.