This blog post completes the activity with a reflection on what it has taught me about my strengths and weaknesses as an elearning practitioner.
I approached this activity with a little trepidation, well aware of the need to understand personal competencies in order to set development goals, but also conscious that people are often not very good at making accurate self-assessments (Boud & Falchikov, 1989; Mitrovic, 2001; Ward, Gruppen & Regehr, 2002; Eva, Cunnington, Reiter, Keane & Norman, 2004). Even after completing the activity, how do I know that my assessments are correct? I suppose one way would be to go down the ePortfolio route, assembling items of evidence against each competency. In fact, would this be a new use of ePortfolios, for assisting students' self-assessment? I'm not sure I've seen that purpose before.
Given that people (myself included!) may not be very good at self-assessment, I was also concerned about trying to make assessments fit the given levels (Complete novice, Below average, Average, Above average and Expert). These labels seemed subjective to me, and therefore likely to make assessment all the harder. They also implied a comparison with others, whereas I felt my assessments of my competencies would be valid however they compared with others' abilities. Similarly, if self-assessments are likely to be inaccurate, how can I accurately assess the 'average' of a larger community?
I therefore found myself defining slightly more detail for each competency level, resulting in the following:
Limited knowledge / understanding

Some knowledge / understanding: Someone reasonably intelligent, with at least a general interest in eLearning. Perhaps able to use/do the competency, but not to analyse/critique/develop. Probably a base level for students of H808.

Good knowledge / understanding: As above, but with some experience, further study etc. in this area of learning/technology.

High knowledge / understanding (expert): Perhaps able to act as an authority or with higher-level study in this area; able to teach this element, with greater awareness of issues, practices, debates and resources.
I think the most striking outcome for me is the distinction between technical aspects, where I score quite highly, and other teaching and learning aspects, where I am comparatively weak. This is not a huge surprise to me, given my background in Computing and Psychology followed by AI in Education research. I clearly lack experience in working with elearning students and in developing elearning tasks, and these are aspects where I hope to gain first knowledge (from this and subsequent courses) and then real-world experience.
I also found the objective-setting element useful, and it is something I haven't done for a long time. That in itself is probably a weakness in terms of both my learning and my elearning professionalism. It was useful to be reminded of the benefits.
D. Boud and N. Falchikov, “Quantitative Studies of Student Self-Assessment in Higher Education: A Critical Analysis of Findings,” Higher Education 18 (1989): 529-549.

A. Mitrovic, “Self-Assessment: How Good are Students At It?,” in Workshop on Assessment Methods in Web-Based Learning Environments & Adaptive Hypermedia (presented at the 10th International Conference on Artificial Intelligence in Education, San Antonio, 2001), 2-8.

M. Ward, L. Gruppen, and G. Regehr, “Measuring Self-Assessment: Current State of the Art,” Advances in Health Sciences Education 7 (2002): 63-80.

K. W. Eva et al., “How Can I Know What I Don't Know? Poor Self Assessment in a Well-Defined Domain,” Advances in Health Sciences Education 9 (2004): 211-224.

The Open University, Reviewing and Improving your Teaching, H851 Practice Guide 7 (1998), 31.