Sunday, May 4, 2025

Six Weeks Later, Still Learning (and Laughing at Myself)

Six weeks ago, I wrote about how evaluation is more than just assessment; it’s a thoughtful, multi-layered process grounded in curiosity, communication, and care. At the time, I had just entered the LDT 506 course and was eyeing the upcoming evaluation project. I was full of confidence, freshly caffeinated, and armed with the AEA’s evaluator competencies (American Evaluation Association, 2018). I assumed I had a solid handle on what was ahead.

And then came the humbling part.

Somewhere between opening the first Request for Proposal (RFP) and ironing out an actual Evaluation Proposal with my team, I had a moment of clarity that felt like it had been sponsored by the Dunning-Kruger effect. You know the one: where you know just enough to be bold, but not nearly enough to be right. That early burst of confidence gave way to the realization that I had miles to go in mastering the art (and math) of evaluation. It’s like Aristotle said (though probably not while wrestling with a dataset): “The more you know, the more you realize how much you don’t know.”

Before I began the project, I confidently placed myself near a 5 on nearly every evaluator competency.  Looking back, I realize that I was actually around a 3 at that time.  Based on my most recent self-assessment, I again placed myself near a 5 in most competencies, but this time I have the knowledge and experience to feel more confident that these ratings reflect my true competence.  One area that continues to feel solid is my communication—especially when it comes to synthesizing ideas, adapting language for different audiences, and asking the kinds of questions that move discussions forward (Competencies 1.4 and 4.4). On the other hand, I'm still working toward fluency in technical domains, especially data analysis and certain aspects of measurement (2.3 and 3.3). Those are places where I know I need more repetition and deeper exposure.

The competency that most surprised me was the emphasis on understanding program context and values (Competency 3.1). At first glance, it felt like a background note rather than a skill to develop, but through this project, I saw how central it is to building an evaluation design that actually serves the people it's meant to serve. If you don’t understand the setting, the priorities, or even the politics behind a program, your carefully crafted report might be received as if it were written in a completely different language.

Looking ahead, I know that if I plan to go further in the field of evaluation, I’ll need to continue seeking out projects that stretch my skill set, especially in areas that aren’t second nature to me yet. I feel far more comfortable working with data and statistics than with interpreting thematic, qualitative contexts, so I’d pair myself with colleagues who have strong qualitative backgrounds so I can keep building muscle in those areas. I would also revisit the AEA evaluator competencies regularly as a kind of map that reminds me that being “done learning” isn’t really the goal. The goal is becoming someone who’s willing to adapt, revisit assumptions, and maybe, just maybe, get comfortable saying, “I don’t know… yet.”

 

Reference

American Evaluation Association. (2018). Evaluator competencies. https://www.eval.org/Portals/0/Docs/AEA-Evaluator-Competencies.pdf

Saturday, March 22, 2025

Evaluation...More Than Just Assessment!

In the world of evaluation, professionals play a crucial role in assessing programs and policies to ensure their effectiveness. For my current coursework in LDT 506, I completed a self-reflection exercise on my competencies as an evaluator. This process involved taking a closer look at my strengths and areas for improvement regarding my understanding of evaluation as a practice.  Using a form provided to me, I looked at each of the five domains presented in the AEA Evaluator Competencies (2018) and used a Likert scale to rate my confidence in and knowledge of each aspect.

For the first domain, “Professional Practice,” my average rating was 5.11.  This indicates that I have a solid understanding of the fundamental responsibilities of an evaluator.  For the second domain, “Methodology,” my average rating was 5.07, which indicates that I am very confident in the processes required to conduct an evaluation.  For the third domain, “Context,” my average rating was 5.25.  This is slightly higher than the first two domains, which shows I feel very confident about how to incorporate various factors that influence data into the evaluation process.  In the fourth domain, “Planning and Management,” my average score was 4.5.  This was my lowest-scoring domain, although it is still above the middle score on the Likert scale.  This indicates that I have a conceptual understanding of the practices related to evaluation, but I do not feel as confident about putting them into practice.  The final domain, “Interpersonal,” includes aspects related to effective communication.  My average score across this domain was 5.75 (my highest score) which shows that I feel very confident about my current ability to adhere to these communication standards.

My average across all domains was 5.14, which indicates that I feel I have a strong understanding of both the responsibilities of and the strategies required to be an effective evaluator. These are self-reported values, however, so some bias is inevitable; realistically, I would likely fall a bit lower. Although I have not conducted an evaluation or acted in the professional role of an evaluator, I do believe that I have a strong foundation of knowledge and would be competent in this field.  My primary weakness would be developing skills related to the specific technology used in the evaluator role.  I would also need to be trained according to an agency’s preferred methods of collecting data (e.g., surveys versus case studies).

When reviewing the evaluator competencies, I initially felt surprised that so many of them were focused on parts of the job that are not related to actually collecting and communicating data.  My initial assumption was that most or all of the competencies would center on presenting reliable and valid findings; instead, most of them deal with interpersonal considerations like the ability to communicate effectively and incorporate diverse perspectives when developing and reporting evaluation findings.  This was a positive surprise, though, because I feel that the ability to acknowledge inherent biases in a situation should be part of creating evaluations that are able to present equitable data.

Outside of this class, if I were pursuing further training in the field of evaluation, I would focus particularly on learning technology and programs related to this field.  I would also want to educate myself on the types of data collection that evaluation agencies prefer.

 

References

American Evaluation Association. (2018). AEA evaluator competencies. https://www.eval.org/About/Competencies-Standards/AEA-Evaluator-Competencies

Sunday, March 2, 2025

Learning About Learning Is Meta

The word “metacognition” means, in layman’s terms, to think about thinking.  Learning about the concept of learning is itself a metacognitive process.  It takes reflection and analysis not only to comprehend the theories presented, but also to synthesize them with the learning you are simultaneously undergoing so that you can understand them more deeply.

During the EDP 540 course, I was able to engage in this process each week; as new theories were introduced with each new module, I compared them with previous teachings in order to develop a more comprehensive picture of learning design as a whole.  In the field of instructional design, knowledge of these theories creates a foundation which can be used to craft learning materials that are tailored to diverse learners and therefore far more effective.

Behaviorism and Gamification

In this module, I integrated behaviorism and gamification principles into an interactive Desmos escape room. Behaviorism, which emphasizes learning through reinforcement, was applied by structuring the escape room with immediate feedback mechanisms. Correct responses unlocked new clues, reinforcing desired learning behaviors. Gamification elements such as progression and challenges increased engagement and motivation. This approach effectively transformed a boring work training into an immersive experience, making learning both fun and impactful.

starting image of an escape room activity
screenshot from an escape room activity
 

Humanism and Motivation

In order to demonstrate Keller’s ARCS (Attention, Relevance, Confidence, and Satisfaction) model, I designed an infographic to engage learners in understanding remote work best practices. Attention was captured through bold visuals and compelling statistics. Relevance was addressed by aligning content with real-world remote work challenges. Confidence-building strategies included clearly stated productivity tips, while satisfaction was reinforced by including information about additional resources. This structured approach ensured that learners remained engaged and motivated to apply the information.

an infographic about the benefits of remote work


Cognitivism and Mayer’s 12 Principles of Multimedia Learning

To demonstrate cognitivism, I created a slideshow presentation that followed Mayer’s Multimedia Principles, which emphasize reducing cognitive overload and enhancing learning through audio-visual processing. Stock photos were used to complement text, providing visual representation of key concepts. The coherence principle guided the elimination of extraneous elements, while color consistency enhanced visual appeal and readability. This intentional design improved information retention and learner engagement.

a screen capture from a slideshow about remote work safety

Constructivism and Community of Inquiry (CoI) Framework

The Community of Inquiry (CoI) framework was applied to foster social, cognitive, and teaching presence in an online learning community. My activity, "Mentor Meetup," paired veteran employees with new hires for virtual mentorship. This initiative promoted social presence through relationship-building, and cognitive presence by facilitating meaningful discussions.  It also provided opportunities to share and learn from diverse backgrounds and created a teaching presence through guided mentorship. 

a screen capture from a document about a mentorship activity


Sociocultural Learning Theory, Situated Learning Theory, and Communities of Practice (CoP)

My team-building activity, "Secret Ballot," drew on both sociocultural learning theory and situated learning theory. In this exercise, marketing team members anonymously submitted ad concepts, which their peers then attempted to match to the team members who submitted them. This activity fostered collaborative learning, leveraging each participant’s cultural background and unique perspectives. By contextualizing learning within a shared social experience, team members developed a deeper understanding of branding strategies and creative diversity.  Additionally, I wove my own experiences in the military into this activity as a narrative. I was able to reflect on how the military created opportunities for teams full of diverse people who worked toward the same objectives.

a screen capture from a document about a sociocultural activity

Self-Determination Theory and Moore’s Theory of Transactional Distance

The collaborative project required an application of Self-Determination Theory, Transactional Distance Theory, and generative AI to design an innovative cybersecurity training program. My team developed a quick pitch for a training scenario that would be tailored to the user, but the part that I found to be most impactful was the concept of an AI-powered chatbot that provided on-demand cybersecurity support. Self-determination principles were incorporated by fostering autonomy through self-paced learning, competence via interactive scenarios, and relatedness through team collaboration. We employed transactional distance theory to help conceptualize a training environment that did not feel extremely removed from an in-person learning environment. Late in the development stage, my team realized that revisions were required to align with project requirements.  We communicated effectively as a team and were able to update our pitch to include several key features that it was missing before submitting.

 


Each module demonstrated how learning theories collectively contribute to effective instructional design. Behaviorism and gamification enhanced motivation, while Keller’s ARCS model structured engaging content. Mayer’s principles improved multimedia learning, the CoI framework facilitated community-driven learning, and sociocultural theory fostered collaborative experiences. Integrating these diverse frameworks led to effective, meaningful learning experiences.

Connectivism and Networked Learning

Connectivism and networked learning offer powerful strategies for ongoing professional growth. By engaging with online communities, professional groups, and learning networks, instructional designers can stay current with emerging trends and technologies. Strategies include participating in online forums, joining professional organizations, and attending webinars and conferences. By actively engaging in these networked learning opportunities, instructional designers can continuously refine their skills, adapt to new learning technologies, and stay ahead in a rapidly evolving field.

Friday, December 6, 2024

Academics, Algorithms, and the Atmosphere

            One of the newest threats to the sanctity of learning is artificial intelligence (AI), according to many professionals in the education community (and echoed by voices from unrelated fields).  The growing fear is that students will simply use AI to complete every learning task instantly and will never develop the ability to think for themselves. While this concern has some merit, those who voice these fears overlook the opportunity to use AI both to strengthen the skill in question and to teach students how to navigate and effectively use the tool itself.  AI is now part of society’s technological toolbelt, and expecting a student to move through life without ever using or needing it would be a case of intentional ignorance on the instructor's part.

            Instead, students should have learning tasks that adapt to include and build on AI.  For example, in the course I recently completed (“Emerging Trends of Technology in Learning Design”), I was asked to use AI to construct a narrative story, then to use another AI tool to craft images that fit the story’s scenes, and finally to use a voice generator to narrate the final product.  The entire assignment was done with AI…and it still required hours of work, because I, the human, had to analyze and connect the results in a way that made sense.  By the way, if you want to see the final result, here it is:

 


            This assignment, along with several others that pushed me to draw on AI in many ways, helped me feel that I was using the new technology effectively.  Unfortunately, it also brought on a bit of a guilt trip, because new research suggests that the AI we are all using is powered by servers that are harming the environment (UNEP, 2024).  I am the type of person who does not use straws or grocery bags because I worry about the long-lasting harm that consumer habits have on the planet, so finding out that such a useful tool can be doing just as much—or more—damage is disheartening. I currently find myself on the fence between halting my use of this tool and continuing to use it, albeit in measured amounts, while I await more concrete data that can point me toward a definite course of action.

            Using AI during this course gave me a great opportunity to revisit my original conclusions about how and when I adopt emerging technologies.  I now agree with the initial (unfavorable) response from my instructor regarding my previous assumption about my place on the diffusion of innovation curve.

             In my initial evaluation of Rogers’ Diffusion of Innovations Theory (Rogers, 1983), I implied that I was an “Early Adopter” because I don’t necessarily create original technology ideas, but I don’t wait for a technology to become popular before I start using it, either (Eve, 2024).  My instructor reasoned that this definition actually sounds more in line with “Early Majority,” and while I hesitated to agree with her at first, during this course I’ve come to realize how accurate that is.  When re-analyzing the curve (shown below), “Early Adopters” might seem far removed from the innovators, using technology that is already tested and ready for widespread use—but they are probably more like beta testers.  These are the ones who discover all of the bugs, the glitches, and the things that might accidentally, I don’t know, shut down the local power grid.  It’s not until after they’ve been using the technology for a bit that the rest of the population starts to see it, and that’s where I fit.



            Although I would like to say that I am an “Early Adopter” because it makes me feel like I’m not dragging my heels and holding up progress…the truth is that I don’t have a lot of time for trial-and-error with new technology.  I am currently juggling more projects and obligations than I probably should, if I’m being honest, and trying to fit in “Product Tester” wouldn’t work very well.  In fact, I recently turned down an opportunity to be at the forefront of a brand-new classroom AI app because I didn’t have the time to devote to it!  So, yes, I am comfortable with “Early Majority” for the time being—but if I start falling behind and end up in “Late Majority” or, even worse, “Laggard,” I hope somebody lets me know!

 

References

Eve, H. (2024, October 27). Necessity is... Blogger. Retrieved December 4, 2024, from https://heathereveldt.blogspot.com/2024/10/necessity-is.html

Rogers, E. M. (1983). Diffusion of innovations (3rd ed.). Free Press.

UNEP. (2024). AI has an environmental problem. Here’s what the world can do about that. https://www.unep.org/news-and-stories/story/ai-has-environmental-problem-heres-what-world-can-do-about

Sunday, October 27, 2024

Necessity is...

Every technology was new once.  This is true of technologies we use every day—such as microwaves, computers, and traffic lights—as well as the technologies that brought our society to this point—such as compasses, stirrups, and the printing press.  For each of these technological advancements, along with millions of others, there was an inventor and a small group of innovators who were willing to try their wild new invention.

I…am not one of those types of people.

Don’t get me wrong, I love seeing and hearing about new ideas.  I cheer for others who are bravely and boldly going where no one has yet gone.  As for myself, however, I prefer to wait until there is just a shred more evidence of reliability and credibility.  In other words, I’ll stand on the shore and cheer for the small group trying out their suspicious-looking wooden contraption…but I’m not going to get in it myself until I can see that it holds them up and isn’t sinking.  I'm an Early Adopter.


Image courtesy of  https://theboldbusinessexpert.com/2020/11/02/diffusion-of-innovation-getting-past-the-first-wave-of-innovators-and-early-adopters-to-reach-the-tipping-point/

As a teacher, I’ve spent many years designing lessons, activities, and assessments that I feel target my students’ learning needs while also helping them to stay engaged.  I find this challenging and motivating because it draws out my creative side (which otherwise would be left in the dark, slowly and silently crafting half-formed fantasy novel plots while I sleep).  Even though I have fun creating some things, when I employ technology, I want to rely on structure that has been designed by others—even if not many people have discovered it yet. 

One great example of this is the site Blooket.  Blooket uses the gamification learning model created by Kahoot and later enhanced by Quizizz.  However, instead of simply repeating the question-and-answer approach, Blooket employed a new strategy that challenges students to interact with fun (non-educational) mini-games in between question sets.  These mini-games keep students highly engaged and driven to demonstrate their knowledge.  I first found Blooket last year, before it had gained as much attention as it currently has, and tried it out with my students—it was an instant hit! 

I enjoy trying out and embracing new technology that supports my students’ learning, even if it means I have to learn new things.  This, I believe, is what most separates me from the later stages of the curve.  Once I learn a technology, I immediately look for ways to share that knowledge.  Individuals who fall in the “Early Majority” benefit from early adopters like me, because they have a more clearly defined route for learning and implementing the technology.  Likewise, the “Late Majority” generally begin using a technology due to social or employer pressure, but they have strong guides for how to integrate it smoothly.  “Laggards,” I feel, always struggle because they are the ones who resist change and only adopt a technology when there are no (or few) other options.

If I transition from the role of a teacher to that of a learning designer, I do not feel that my place on the curve will change much.  Although I enjoy creating, I still shy away from the idea of being the one who invents a technology, or even a system that could be considered a technological advancement. I would most likely continue to use newly tested but not-yet-widely-known technology in order to create the best product possible for my customers and clients.


References

Rogers, E. M. (2003). Diffusion of innovations (5th ed.). Free Press.

Wednesday, October 9, 2024

The Making Of A Module

Most adults have participated in an eLearning module at some point—through a work-mandated training course, as part of a self-paced online curriculum for a certification, or even by signing up for free online courses to pursue personal interests.  We now have certain expectations of these courses, but we rarely stop to think about the person (or people) behind the curtain who address those expectations.  Developing an eLearning module from the ground up during the LDT 504 course exposed many of the secrets and struggles of the process.

I would like to note that I found this experience much more difficult than I think I would have if I were not working full-time concurrently (as I am sure many other course participants are as well).  Without having as much time to devote to the material, I found that I was skimming through instructions, skipping past non-essential module blocks, and fast-forwarding through lecture recordings to find essentials.  This ultimately resulted in several unfortunate misunderstandings and mishaps that greatly interfered with the flow of my module’s development.

One of the first obstacles that I encountered while creating this eLearning module came in the planning phase.  I prefer a visual planning style, and if possible, the opportunity to sketch something by hand in my first rough draft.  Usually this will not look like much more than scribbles to anyone else, but it makes sense to me.  However, with the introduction of the Twine wireframe tool, I was scrambling to understand how its new features worked and how to input the large bulk of information required.  I didn’t sketch it out for myself because I had no idea what the tool I was using would end up looking like and relied only on how it was coming together on the screen.  I also felt overwhelmed that I needed to dump so much into each of the pieces I added to the wireframe.  Even with a liberal amount of copy-paste to keep formatting consistent, it still felt tedious and frustrating.  As a person who does not usually plan out every fastidious detail prior to execution of anything, that step was probably the worst for me.

Despite feeling behind and overwhelmed for most of this course, there were some parts that I genuinely enjoyed.  I am a creator at heart, and I have a rudimentary understanding of how coding works (although the only coding language I’ve learned has very limited applications).  This helped me grasp concepts in Articulate 360 like “states,” “variables,” and “triggers” more quickly.  As I began adding more of the required slides, I started to feel curious about trying new things with the program.  Some triggers that I spent time on ended up duplicating a feature that was already part of the interface, such as the play/pause button, so I didn’t keep them.  Others, like disabling the “next” button until a certain trigger fired, I did keep.  I also felt that if I’d had more time with the program while working on the final part of the project (not during the early stages when I was just trying to keep up), I would have experimented with many more strategies.  To provide an example—when I work with the program Desmos (an online website for creating activities), I can spend hours on tasks like choosing which parts of a screen appear or disappear depending on correct or incorrect answers, buttons that cause hints to appear when pressed, screens that reveal answers to previous work only after that work has been completed, graphs that can judge their own correctness, and so on.  I would have liked to explore this program to that level of detail and then submit a product that demonstrated those types of strategies.

While I am glad that some of the key resources were provided for the design case, there was still much confusion when I launched the project—namely over the comment that I would “receive an email” which actually referred to a faux email included in the design case notes themselves.  I also spent an inordinate amount of time researching a topic that I was definitely NOT a subject matter expert on.  I assume that these three design cases were prepared because it is easier to grade on a rubric; however, if that is not the driving reason behind the choice, I believe that offering students the option to choose their own topic (but provide a limit to branches in the module) would be a great support.  For example, since I am a math teacher, if I’d been able to develop this module about methods for Polynomial Division, I would have been able to gather the materials much quicker and therefore spend more time developing the module features.  I already have many images and examples, and I can speak confidently about each method.  I would also feel comfortable developing assessment questions that accurately match objectives and are appropriate for the audience. 

        Looking to the future, I know that generative AI will come to play a significant role in the development of eLearning modules.  I foresee human-made programs that employ generative AI where a person inputs key details such as course objectives, audience demographics, module style, assessment style, and module length; then the module is designed to match these characteristics and passed back to the designer for editing in order to ensure that all goals are met.  While this is fascinating from a pragmatic standpoint (yay!  Less work!) I think I am more excited about the prospect of virtual reality or at least augmented reality modules.  Just think—Pokémon Go! meets “Safety in the Workplace Refresher Training”—in my mind I am seeing people using their devices to roam around (in a safe way) and “catch” AR cartoons of workers doing something dangerous.  I just hope that the people who make those eLearning modules don’t make them boring… 

Sunday, September 1, 2024

A Passion For Learning...Digitally

Digital learning is a modern phenomenon that brings with it a host of benefits; however, it is not without its share of pitfalls.  I’ve shared some of my personal experiences with digital learning in this blog previously, in the post “The Benefits Of Online Learning.”  In that post, I mentioned that I had a poor experience in the past with an online Physics course, and I mentioned that most of my other experiences have been positive.  While this is still true, I’d like to highlight some additional specific examples to compare. 

One of the worst types of digital learning that I’ve experienced in recent memory is, collectively, the annual trainings that I must complete as a teacher each year.  There is a considerable amount of bias in this judgment, though, because these trainings are actually created well—they present the required content in an efficient way, accompanied by an assessment at the end.  However, I don’t feel as if I am learning anything new when I click through the slides and let the videos play on mute while I multitask (as I am currently doing).  For the simple reason that no learning is happening, I have to pronounce them ineffective and pointless.  Because these trainings are largely based on state and federal requirements, there is little room to make changes for relevance; in my opinion, the best fix would be to lengthen the interval between required trainings from one year to three or five.  Failing that, the creators of the trainings could add to the assessment a question that poses a hypothetical situation specific to each teacher’s own classroom.  For example—since I teach in a high school, the training about “bloodborne pathogens” might include a hypothetical scenario set in a high school.

One of the best types of digital learning (other than these LDT courses which I’ve discussed in previous posts) came about after teachers had to scramble to find resources for teaching from home during COVID-19.  Prior to this time, I’d used the website teacher.desmos.com for pre-made activities for my students, but did not expend time and effort to learn how to edit the activities.  However, the demands of distance learning put pressure on me to find or create lessons that supported my students—so I started combing through all of the resources available, both from the Desmos site itself and on “How To” YouTube videos, and slowly taught myself how to code using their platform.  This was very exciting to me because I felt like I was learning a brand new and immediately applicable skill that I’d never had before, and my students benefitted from it.  Even though there was no course or instructor, I was absolutely learning in a digital environment and building meaningful skills (that I still use today in my in-person classroom).

When looking at these two experiences in particular, one bold theme about my learning is clear: regardless of the context or structure, I value (and am passionate about) information that brings me new skills.  Information for information’s sake does not satisfy me as learning; however, if it comes with actionable steps that lead to new skills—especially skills that can be implemented to support personal or professional goals—then I will be engaged and motivated to succeed.
