Six weeks ago, I wrote about how evaluation is more than just assessment; it’s a thoughtful, multi-layered process grounded in curiosity, communication, and care. At the time, I had just entered the LDT 506 course and was eyeing the upcoming evaluation project. I was full of confidence, freshly caffeinated, and armed with the AEA’s evaluator competencies (American Evaluation Association, 2018). I assumed I had a solid handle on what was ahead.
And then came the humbling part.
Somewhere between opening the first Request for Proposal (RFP) and ironing out an actual Evaluation Proposal with my team, I had a moment of clarity that felt like it had been sponsored by the Dunning-Kruger effect. You know the one: where you know just enough to be bold, but not nearly enough to be right. That early burst of confidence gave way to the realization that I had miles to go in mastering the art (and math) of evaluation. It’s like Aristotle said (though probably not while wrestling with a dataset): “The more you know, the more you realize how much you don’t know.”
Before I began the project, I confidently placed myself near a 5 on nearly every evaluator competency. Looking back, I realize I was actually closer to a 3 at that time. In my most recent self-assessment, I again placed myself near a 5 in most competencies, but this time I have the knowledge and experience to feel confident that those ratings reflect my true competence. One area that continues to feel solid is my communication, especially when it comes to synthesizing ideas, adapting language for different audiences, and asking the kinds of questions that move discussions forward (Competencies 1.4 and 4.4). On the other hand, I’m still working toward fluency in technical domains, especially data analysis and certain aspects of measurement (Competencies 2.3 and 3.3). Those are places where I know I need more repetition and deeper exposure.
The competency that most surprised me was the emphasis on understanding program context and values (Competency 3.1). At first glance, it felt like a background note rather than a skill to develop, but through this project I saw how central it is to building an evaluation design that actually serves the people it’s meant to serve. If you don’t understand the setting, the priorities, or even the politics behind a program, your carefully crafted report might be received as if it were written in a completely different language.
Looking ahead, I know that if I plan to go further in the field of evaluation, I’ll need to keep seeking out projects that stretch my skill set, especially in areas that aren’t second nature to me yet. I’ve grown comfortable working with data and statistics, but I’m not as strong at interpreting thematic contexts, so I’d pair myself with colleagues who have strong qualitative backgrounds so I can keep building muscle in those areas. I would also revisit the AEA evaluator competencies regularly as a kind of map, one that reminds me that being “done learning” isn’t really the goal. The goal is becoming someone who’s willing to adapt, revisit assumptions, and maybe, just maybe, get comfortable saying, “I don’t know… yet.”
Reference
American Evaluation Association. (2018). Evaluator competencies. https://www.eval.org/Portals/0/Docs/AEA-Evaluator-Competencies.pdf