EVALUATIVE THINKING - BEYOND MONITORING & EVALUATION

The concern to demonstrate ‘results’ or ‘impact’ has provided a broad sectoral incentive to invest more in ‘design, monitoring and evaluation’. This has led to a proliferation of manuals for programme staff and practitioners, and to a growing specialisation of ‘evaluation’, with courses, communities of practice, professional associations and even a few universities offering advanced degrees in ‘evaluation’.

Several operational or funding agencies have also sought to develop more in-house ‘evaluation capacity’. One ‘outcome’ is more demanding and sophisticated Terms of Reference, though often not matched by budgets that can ensure the time required to meet those expectations. On several occasions I have found myself asked to ‘evaluate’ programmes with multiple strands, in volatile environments, carried out by various collaborating agencies, over a span of three years, and with little solid monitoring data available - while the budget was only enough to allow me a good week on the ground. In addition to the requested retrospective inquiry, the commissioning agency may also ask me to look forward and advise on the strategic development of the programme for the next three years or so. That, however, implies additional time investment in scenario-thinking, assessing what others are doing in the same area or around the same theme, and thinking through ‘options’. But evaluators are not magicians who can inquire into everything, robustly, within very short time frames.

Relevant and valuable as the investment in M&E is, we are, however, missing a critical element: stronger evaluative thinking among programme staff.

With apologies where generalisation is unfair, my experience is that many programme staff do not themselves regularly ask the important evaluative questions during ‘implementation’. Only at the time of a formal evaluation do these get central attention. Why is that a problem?

We know that programming to catalyse positive social and political change in a society in general, and even more so in volatile situations, will never be a straightforward, linear process of simply ‘implementing’ a ‘plan’. Dynamic steering and adaptations will almost certainly be required.

That means we need quicker feedback and learning loops. We can’t wait for more comprehensive formal evaluations further down the line. Even if during the planning phase we have identified our indicators thoughtfully, we can only collect ‘data’ regularly for so many. So we need to be careful that our chosen indicators do not unintentionally lead to ‘tunnel vision’, i.e. prevent us from scanning the wider environment with an open and inquiring mind, attentive to issues that may be important but that we haven’t thought about beforehand.

Most of my professional work has been in settings with a high degree of ‘complexity’ – in the sense of David Snowden’s Cynefin framework. A core characteristic of ‘complex’ situations is that the relationship between cause and effect can only be perceived in retrospect – often with a range of contributing factors rather than a single clear cause. We can’t rely on ‘best practice’ and only to a limited degree on ‘good practices’ – at best we know what definitely would be ‘bad practice’. In those situations, the required attitude, says Snowden, is one of ‘probing-sensing-responding’. ‘Probing’ signals that we are trying and testing, actively observing what seems to work and what does not, and why – and what unintended consequences our actions may have. Practice here will be ‘emergent’. ‘Theories of change’ would really be better referred to as ‘hypotheses of change’, signalling that we regularly need to test the underlying assumptions against our real-world experience.

Indicators are relevant and useful, but they are not enough. We need more than a solid pre-defined ‘monitoring system’. We need programme people with an inquisitive mindset, who regularly ask probing questions. I tend to refer to this as ‘reflective practice’, but it can also be called ‘learning-in-practice’ or ‘evaluative thinking’. Monitoring data in any case still need interpretation – which again requires asking the right questions and probing the possible answers.

What inhibits us from such ‘evaluative thinking’? Here are at least four contributing factors:

  • ‘Project thinking’: Much developmental, governance, human rights and peace work seeks to catalyse positive social, political and economic change in a society. Yet the way aid is administered has led many of us to see ourselves as ‘project implementers/administrators’ rather than ‘catalysts of change’. This is all the more problematic as the underlying paradigm of ‘project’ thinking is that of an engineering challenge: even though high levels of expertise may be needed, the change process can supposedly be controlled. Most of the time we work with checklists, not question sheets.
  • The ‘project cycle’: In project cycle visualisations, ‘evaluation’ comes at the end of the cycle, which invites us to postpone the evaluative questions until then. So please design project cycles with mid-term reviews and real-time evaluations built in!
  • The professionalisation of the evaluation field: Just as ‘gender’ becomes the responsibility of the ‘gender specialist’, and ‘security’ that of the ‘security officer’, ‘evaluation’ is no longer our responsibility but that of the ‘evaluation specialist’.
  • Solution orientation: Last but not least, the prevailing mindset is that aid-supported programming and ‘technical assistance’ offer ‘solutions’. Aid workers are not trained in the art of asking powerful, catalytic questions that enhance not only the quality of their own inquiry but also that of a collective inquiry by key stakeholders. If we are too confident that our planned action will indeed bring ‘solutions’, there is little felt need to regularly ‘probe and test’ as we go along. We may be gathering great data about our chosen indicators, but we don’t go back to the questions that underpinned the choice of those indicators, or to other key questions that were never translated into a ‘SMART’ indicator.

Imagine that we carried out our actions or programmes with an ‘evaluative’ mindset - what would we expect to see (indicators, indicators!)? Well, for example:

  • Teams that collectively generate the ‘question sheet’ with the most important questions related to different aspects of their work, the relationships, the effects and impacts, their positioning and role within a changing environment etc. And then regularly have conversations about them;
  • Investment of time, effort and attention in regular, structured reviews by the programme team, asking the deeper and wider questions also beyond the ‘monitoring data’, rather than counting primarily on a formal evaluation towards the end of a project cycle to do this;
  • Learning and probing ‘journals’ that complement the monitoring data and document the reflective inquiry process, as well as the adaptations that were made (or not) as a result of it;
  • A feeling of confidence in the programme team, ahead of a formal, external, evaluation, as they own and have been working with the critical evaluative questions themselves;
  • No more situations wherein an evaluator must answer all key questions, with evidence gathered through sound methodologies, within an unrealistically short time span. By and large the evaluator can focus on verifying the critical reflection of the programme team, and concentrate on some important questions or areas of inquiry that the team may not have covered so well.

Would I as a donor be comfortable with this? Yes. When it comes to trying to catalyse positive change in complex and volatile environments, I have more faith in a programme team that demonstrates competence as a reflective and smart navigator in environments where much is unpredictable, than in one that goes more or less on ‘automatic pilot’, even if its ‘flight recorder’ collects plenty of pre-determined monitoring data.