Better Evaluation is an international collaboration to improve evaluation practice and theory by sharing information about options (methods or tools) and approaches. It has just published the reflective account of the AfricaAdapt evaluation that we carried out with CommsConsult and John Rowley in 2010, the product of a virtual write-shop process led by Irene Guijt that we wrote about last year. It's probably fair to say the evaluation was a qualified success for all of us. As we describe in the paper, there were tensions and misunderstandings throughout the process which made it more difficult than it could have been, and none of us finished with the beaming smile of a job very well done.
I am sure that is common for many, many evaluations, but few take the time to reflect, write - and re-write - an account of the process as a learning exercise and then share the result, including our continuing differences of opinion. The key issue for all of us was achieving an effective balance between evaluating and enabling the team to learn, something that is also a common feature of many evaluations. Reviewing the article a year after we completed the process made me think again about our experience earlier this year of trying to integrate triple-loop learning into the IDS climate change knowledge exchange.
It's interesting to monitor what is going on in my own thinking and learning as I reflect on something which happened three years ago. While there is a lot about how to do evaluations better - second loop - I'm taken more to thinking about the role of evaluation in learning and the role of failure in development. I am informed partly by an interesting thread in KM4Dev, started by the fizzing ideas-bomb that is Nancy White, who linked us to a fascinating conversation on the USAID Learning Lab site about linking failure and learning. That links in my mind to a great Duncan Green book review - great in the sense of thorough enough that I feel I can comment on the book without having opened it - about The Limits of Institutional Reform in Development, by Matt Andrews. After a great deal of analysis of failures, Andrews proposes an approach he calls "Problem-Driven Iterative Adaptation" (PDIA). That's an idea which overlaps significantly in my mind with all the thinking, talking and work we did on emergence in and around the IKMemergent project.
Our investigation into AfricaAdapt suggested that the project was indeed emergent, experimenting and learning. And there were plenty of examples where it had an impact. But it was a young project and, unsurprisingly, not achieving all of the lofty objectives spelt out in early concept documents. But when we began talking about those positive and negative findings, we seemed to stop communicating with each other. Reflecting on the process, from a safe distance, it's clear that we didn't explore in any depth with the 'subjects' of the evaluation, let alone the commissioners, their assumptions about success and learning. For example, I've always loved Beckett's "Ever tried. Ever failed. No matter. Try again. Fail again. Fail better." It was clearly part of my starting assumptions about the nature of learning and project management, derived in part from my own experience of failure. Without that sharing of assumptions about the process of evaluation in the context of Development - our own Theory of Change, perhaps - it's not surprising we ran into sand. Yet how practical is that kind of exploration in times of limited budgets and busy people? If a team proposed starting with that kind of open, honest exploration of assumptions in an evaluation I was commissioning, I am not sure I would give them the contract! Which is why, coming full circle, it's so interesting and useful to be engaging with the Better Evaluation project as Aid and Development programmes come under increasing pressures.
Thanks to John Rowley for this essential learning.