Saturday 26 February 2011

A Wordle moment

This week, Tony produced a really nice Wordle of the criteria for evaluating an elearning innovation that we've been discussing in our forum - a great way of visualising the big themes and issues.

Inspired by Tony, and remembering that I used to enjoy Wordles in this blog back when studying H808, I fed Wordle the current RSS feed for Allies in eLearning...

Wordle: Allies in eLearning feed 26Feb2011

Wednesday 23 February 2011

Early adopters or early majority?

Moving on from Rogers' model, we're now looking at Geoffrey Moore's Chasm Theory, which questions the Innovation Diffusion model's claim that opinion leaders (early adopters) hold the key to product diffusion. Moore argues that it is vital to target the early majority if a new product is to spread to most consumers, and that there is a ‘chasm’ between the early adopters and the early majority.

Here are my musings after reading the Tanahashi account.

If, as this article suggests, innovators tend to focus on novelty value, I would think it’s not surprising that innovators in education are relatively few - real educators know they need something more substantial than something flashy and new. [Diversion: This reminds me of a nice piece about Clicky Clicky Bling Bling elearning ("eLearning with lots of whiz, lots of bang, lots of clicky-clicky in a lame attempt to add pizzazz to dry content and to make it more engaging... when you unwrap the sparkle, you’re left with junk")]. So it’s the early adopters (termed opinion leaders in this article) who need to know what the new benefits are of an idea. I think in a sense this is rather disparaging of innovators, suggesting that novelty value is inherently cheap and nasty. It can’t be, or the early adopters wouldn’t have anything to pick up when they’re evaluating the value of an innovation, but clearly not all innovations are good, and some of those which aren’t, won’t be adopted.

The point about the formation of word-of-mouth networks around opinion leaders I thought was an interesting one. Clearly it’s a well known adage of the marketing world, but we also see it in elearning. I think an example of this in action might be the buzz around e-portfolios. There are plenty of institutions now saying “these are great, we need to get them, let’s build them into our VLE” etc, but fewer who have really taken the time to understand what need they are trying to meet, how e-portfolios will help, and what they need to do to make it a successful intervention (besides just running the install script and hoping for the best). These institutions are following the opinion leaders, because they’ve heard it’s the thing to do.

Marketeers (is that a word?!) are well aware of the influence that opinion leaders can have over other consumers, and the article notes that opinion leaders are said to hold the key to product diffusion. This made me think of the Stephen Fry effect - a celebrity with a known penchant for new consumer technology, and a prolific user of various social media. I don’t have any stats on this, but I’m sure that any product which elicits a Tweet from Stephen Fry will see web searches, and most likely sales, spike as a result. I don’t think the elearning market is as quick to jump as buyers of consumer electronics are. There are greater political and financial pressures at play, and snap purchases just aren’t feasible. However, that’s not to say that poor purchases are not made by institutions which seem to believe a technology is going to solve a problem, even without the necessary implementation plan to support it.

Another interesting point was regarding the fact that when a product is adopted, its uses begin to differ from those originally intended, and it is these new uses which can earn it its place in the market. I haven’t come up with an elearning example of this yet, but I'm sure there are many - answers in a comment please!

Moore’s Chasm Theory suggests that there is a hidden chasm which impedes diffusion of a product to a larger market beyond the initial innovators and early adopters. He suggests the reason for this chasm is that early adopters want to stay ahead of the competition with products that nobody else is using, but that early majority consumers will want to keep up with the competition, knowing that lots of other people are using that product. I’m not sure this is entirely consistent with the articulation of Rogers’ Innovation Diffusion, as I understood that to suggest that the early adopters want to know the new benefits of an idea before they buy into it. Perhaps they can do that as well as staying ahead of the competition. Chasm Theory suggests that because early majority consumers take up a product only when plenty of other people are using it, if it is only being used by a limited number of early adopters then the early majority will be reluctant to purchase.

Moore (or perhaps Tanahashi - I’m not sure where the statement comes from) therefore suggests that there is no motivation for early majority consumers to buy a product. This I found a bit confusing, because clearly something does happen which allows an innovation or product to move from early adopter to early majority take-up. The article argues that for a product to make the jump to early majority purchasing, it’s necessary to show the early majority population examples from the very beginning. I thought this point was interesting as I can relate to it from both an elearning and a mainstream consumerism point of view. As we’ve noted, early adopters want to know what they get which is new, and the early majority need to know that they get secure benefits with minimal risk. Both groups can be supported by the provision of examples, and it’s this desire and need to see something in action before committing that seems common in education to me. It also relates to the trialability feature mentioned in the course notes - something is more attractive if it can be trialled and its benefits demonstrated with minimal risk. It also made me think of the advertising that we’ve seen recently for Windows 7 and the iPad.

The course notes also prompted us to consider the strengths and weaknesses of Moore’s Chasm Theory. I was generally confused by the argument that the early majority won’t buy/take up something until they’ve seen lots of people (presumably the early majority) already using it. But how could those people be using it if they wouldn’t have wanted to buy it because too few people were already using it? It sounded like a bit of a chicken-and-egg problem to me, and while the argument that the early majority need to be provided with examples in order to encourage them seemed fair, I’m not sure it made the basic maths of it any clearer to me.

In terms of implications for elearning generally, I think this reiterates the common-sense idea that to move beyond low take-up by just the opinion leaders, you really have to give people good reasons to commit to a new innovation. This is clearly true in tough economic times, but also relates to non-financial aspects - educators need to know how and why an innovation is going to improve some aspect of teaching and learning, and whether that is sufficient added value over the current situation to make the commitment to jump (which is more than just financial, having far wider cultural and organisational implications).

Thinking of trade shows in the training and education world, I think there are a lot of stands which still attempt to sell based on their shiny newness, or on being cool, or just because they incorporate the latest consumer electronics - that must be good, right?! Moore’s Chasm Theory might suggest that this type of advertising will be sufficient to attract the early adopters (who want to be a bit different from the majority), but that it’s unlikely to win over the early majority, who need to know other people are using something. Generally though, I think educational institutions need to be so careful with their buying decisions that marketing is unlikely to have too much sway.

The course notes suggest thinking about whether you would promote an elearning innovation differently to clients/colleagues depending on whether you saw them as early adopters or early majority. The question leads you to speculate whether you would spend more time demonstrating the software to one group rather than the other. To be honest, I find it really hard to imagine a situation where an audience would be satisfied if I didn’t demonstrate the software to them. Yes, perhaps you show the groups different aspects (you would tailor it to the client anyway), but I don’t believe that most organisations, and certainly not educational institutions, have the resources to spare to jump on a new innovation without some solid evidence about why it is appropriate for them.

References:
Bean, C. (2010) And a Clicky-Clicky Bling-Bling to You! [online] http://cammybean.kineo.com/2010/12/and-clicky-clicky-bling-bling-to-you.html
Moore, G.A. (1991) Crossing the Chasm: Marketing and Selling High-tech Products to Mainstream Customers, New York, HarperBusiness.
Rogers, E.M. (2003) Diffusion of Innovations (5th edn), New York, Simon and Schuster.
Tanahashi, H. (2005) The Innovator Theory [online], Tokyo, Mitsue-Links http://www.mitsue.co.jp/english/case/concept/02.html

Monday 21 February 2011

Will a potential innovation 'fly'?

This week's course notes adapt a framework for considering innovation, based on Rogers (p. 229). In order to consider an innovation's strengths and weaknesses, the framework offers the following criteria:
  • Relative advantage - is it a much better way of doing something, or a valuable improvement?
  • Compatibility - does an innovation fit with the values, experiences and needs of potential adopters?
  • Complexity - will complexity of the innovation, or skills or concepts it requires, make it hard to adopt?
  • Trialability - can adopters trial an innovation before committing to it?
  • Observability - can others perceive the advantage of the innovation?
In the following post, I reflect on my own experiences of innovation, and whether I make calculations using this type of framework.

What have I innovated?

The chatbot, allowing learners to negotiate about their learner model and reflect on their knowledge. But this was research - I don’t feel that was quite the same as innovation. I think perhaps an activity/intervention/method needs to have adopters in order to be an innovation (although clearly that is not the only criterion), and the fact that the research was only extended to trial populations means it can't be said to have been adopted. It was an innovative elearning tool and method though, regardless of the fact that I didn’t have the means to develop it further and make it adoptable.

Did I ask questions from the Rogers’-adapted framework?

Not knowingly. However, I must have weighed up the relative advantage - if I hadn’t believed that there was some improvement possible through the chatbot integration, then I wouldn’t have bothered. Similarly compatibility - it was not such a great step from the existing IT experience of potential users. Clearly compatibility was in my mind to some degree, but more from the need to ensure the HCI was good. Complexity - yes it was complex; that’s why it was research. It’s also probably why this field of research hasn’t made the mainstream - it’s expensive to design and produce robust, nuanced, personalised and scalable AI products! Trialability - again, as it was research, designing a trial was what it was all about, as was observability - improvements had to be objectively demonstrated.

I think the framework is really hard to apply to an innovator - I've struggled above. It seems to make much more sense as something which adopters of innovation would employ (knowingly or otherwise).

So, what have I adopted?

Use of online chat.
In 1997, using online chat wasn’t widespread. I only had the option to adopt this because my parents provided the Internet access - there wasn’t any financial risk to me (although there were phone bills, risky to a 17-year-old!)
Relative advantage - Yes, but it wasn’t just a better way of doing something, but a completely new way of doing something I couldn’t do before - I didn’t have to weigh anything up.
Compatibility - Clearly fitted with my values and needs. The technology was easy to learn, with rapid positive effects. I don’t recall making any kind of assessment.
Complexity - Again, not a technical challenge, so not something I thought through.
Trialability - Highly trialable - it was there, inviting me to try it. And there was nothing to be lost if I tried it and didn’t like it.
Observability - It’s easy to demonstrate advantage when it’s a brand new tool!

And again, did I ask questions from the Rogers’-adapted framework?
Certainly not!

So perhaps a more recent example of adoption, or in fact decision not to adopt...
Smartphone/iPhone/iPad

I haven’t adopted these. I have a laptop with home broadband, an office computer also with Internet access, an iPod for music/podcasts and a mobile phone with a half-decent camera built in. I believe I can read email on the phone and view the mobile web on its tiny screen, but I’ve never read email on it, and haven’t used the web capability outside the first month’s free trial. I think that’s what this lack of adoption comes down to for me - the relative advantage compared to the facilities I already have just doesn’t seem like something I want to pay for. So, for me, the most helpful of the five criteria in explaining my own adoption (or lack of it) is relative advantage.

Reference
Rogers, E.M. (2003) Diffusion of Innovations (5th edn), New York, Simon and Schuster.

Sunday 20 February 2011

Rogers' five adopter types

This week raises the question of who innovates, and who follows. A classic text on the subject is Diffusion of Innovations, by Everett Rogers, first published in 1962 and now in its fifth edition (2003). Depending on when they take up an innovation, Rogers classifies people as ‘innovators’, ‘early adopters’, ‘early majority’, ‘late majority’ and ‘laggards’. Below are some of my early thoughts on Rogers' five adopter types, some consideration of how they correspond to my experience of people's attitudes to innovation, and how they might apply to contexts I know.

My first thought is that Rogers’ categories seem reasonable enough - common sense perhaps. This doesn’t in itself mean they lack value, but I wondered what makes this a model and not just a description. Are they the same (in this case at least)?

I was surprised and even a bit shocked at the statement that later adopters are “generally less empathic, have a lower ability to deal with abstractions, have fewer years of formal education”, etc. This seemed to me likely to be a bit of a generalisation. Surely there are wider factors than individual personality at play, especially within organisations? I wondered, was there any actual correlated evidence, use of personality tests etc? It made me question whether other aspects of what Rogers has to say might also be more personal opinion than testable fact.

Thinking about how Rogers’ categories correspond to my own experience of people’s attitudes to innovation, I found it easy to apply the categories to technology take-up, but wasn’t sure if this was really the same thing as innovation. It made me wonder whether, because innovation is such an early stage in the process, we don’t see it often - and that would be why I’m finding it hard to think of innovators (as opposed to people who just like to buy the latest piece of consumer electronics).

I tried to think how Rogers’ model relates to two contexts I’ve worked in. Firstly, an educational technology research group within a university Engineering department. Here I found that although someone can be at the cutting edge of their field (Ed Tech), they can be part of the late majority or even a ‘laggard’ with other innovations - even to the point of lacking awareness of technologies which have become mainstream IT in their context (e.g. referencing tools, software design tools (such as UML) or web site building tools). At the other end of Rogers’ spectrum, that same person might be perfectly capable of introducing some quite innovative concepts into their teaching - suggesting to me that, like innovations, which our forum discussions have suggested are dependent on context, so too are Rogers’ adopter types. I also found myself speculating a bit about the difference between research (even educational technology research) and elearning innovation. I wondered if it was partly because academic research has a tradition of being grounded in the preceding literature that it tends to progress in small steps (although innovative leaps are possible too). It could also be to do with the facilities available to research groups - while they might develop an innovation, without sufficient participants to try it out, it might never really get taken up. Research funding arrangements may also have an impact - if it’s risky to show your project has ‘failed’ then you might avoid even attempting anything too innovative.

I also thought about how Rogers’ model might apply to the commercial environment of a technology-based training development wing of an IT consultancy company. In the commercial world there is clearly a need to supply something which customers will pay for. If your customers generally want what they already know (e.g. PowerPoint-like support for instructors, or relatively didactic CBT), and are not ready to accept innovation, then it can be hard for a company to be innovative - after all, it will not risk losing customers.

Reference
Rogers, E.M. (2003) Diffusion of Innovations (5th edn), New York, Simon and Schuster.

Saturday 19 February 2011

Innovative practice with elearning - reflections

A few reflections on the case study reading...

Some of these case studies do not sound particularly inspirational. While they undoubtedly had tangible benefits, were they innovative? Again, back to the question of what it means for elearning to be innovative. Well, a number of these practices (providing resources online, developing podcasts, computer-based assessment, etc) sound like they are old hat. However, given an analysis of actual current practice in HE institutions (all these case studies were HE), I wonder whether we really would find a prevalence of these elearning practices. More likely, I suspect, that while there is greater awareness of the tools, technologies and potential of elearning, and the technological barriers are ever decreasing, many of these practices have not yet made the transition to the mainstream. In that case, looking back on these case studies of 5-7 years ago, perhaps they really were innovative. Also, the fact that practices such as those described now seem familiar suggests that they are being adopted and integrated into wider practice - and surely that must be a test of whether an innovation really makes a useful difference.

I’m struggling a bit to think how these case studies can relate to my own context - a technology consulting company with a CAI/CBT production wing. We have to produce what customers want, and if that’s what they ask for... I know there is an argument to be had about helping customers see what they want but didn’t know they wanted, but that’s for another day!

While I accept that these case studies might be argued to have had effects on social equality (they claimed to - one of the reasons I selected them!), I think that the link is rather tenuous for a couple. I think they seem to be claiming that online 24/7 access equals equality - I’m sure it’s a contributory factor, but I was hoping to see a bit more than that.

I liked the fact that reading these case studies made me reflect not just on what value the technological intervention might bring, but also on issues surrounding the implementation. These are crucial for successful uptake, and key lessons highlighted in the cases include the need for substantial stakeholder involvement at as early a stage as possible, and the importance of training in ensuring that everyone understands why you are pursuing this elearning avenue.

The cases showed some nice contrasts between large and small scale, school-wide and single-instigator, big-budget and no-budget projects, and all were able to demonstrate tangible benefits.

Innovative practice with elearning - case study 4

The final case study I selected was

Swansea University Case Study: Use of podcasting in Archaeology

I picked this case study partly because I’m not sure what the fuss is about podcasting, so I wanted to see what benefits had been found. I can see it as a nice means to provide bite-size chunks of portable materials, but what else? I must be missing something.

This particular project involved the production of podcasts combining video of an archaeological site or object with an ‘expert’ commentary, made in the field. Commonly used black-and-white images in text books can be unsatisfactory, so this initiative aimed to allow the lecturer to focus on a specific issue and provide appropriate commentary.

The podcast material was prepared for use by both undergraduate and MA students. First years in particular have often not had the opportunity to visit archaeological sites and so find them hard to visualise. The project team took videos while working in the field, sometimes adding commentary at the time and otherwise recording it separately after the event. One of the aims was also to focus on student-centred learning and encourage a collaborative experience and reflective learning.

Podcasts were added to existing modules and were intended to fill identified gaps in student knowledge. While they were offered as supplementary material, they were integrated into the course’s existing VLE, allowing students to find the relevant material as they progressed. Evaluations suggested that students performed better in image recognition examinations and also appreciated the availability of digital material. Students also responded to staff podcasts by creating their own pieces and it is hoped to encourage them to share these or combine them into a resource on the VLE.

Tangible benefits reported include:
  • Allowed research/fieldwork to be integrated into the undergraduate scheme, helping students understand the development of the subject and research from an early stage of study.
  • Students felt they were engaged with the fieldwork - presumably this was motivational.
  • Offering commentary seemed to help students engage with the sites in a more meaningful way
  • Possible uses include using podcasts as a museum or site guide, using them to prepare for a real site visit, offering access to archaeological sites for students with mobility problems, to illustrate a specific point in a lecture or even as tailored support for a student who does not understand a particular issue.
  • Benefits the lecturer through being required to think about how to present/discuss complex issues.
  • Effective in enhancing the range of learning materials available to students.
My thoughts:
  • I noted that this case study seemed to be reporting the work of predominantly one lecturer, working alone, using images for which he owned the copyright, and largely without institutional support. This was in stark contrast to the extensive budgets, planning and development teams of the medical school VLE and computer-based assessment projects of the previous case studies. A nice illustration that with a little knowledge, and the use of widely available technology, subject matter experts can produce valuable elearning materials - without extensive budgets or IT support.
  • Use of the materials by both undergraduate and MA students is a nice example of re-use or re-purposing of e-materials.
  • Integrating the podcasts into the existing VLE sounds simple, but is worth noting - supplementary materials which students can’t find or can’t relate to other aspects of the course are of reduced value, so this was a good move.
  • A major drawback was in production time - about 1 hour to compile a 3 minute podcast. This was particularly onerous given that only one lecturer was producing materials. However, podcasts are clearly re-usable and ideal for sharing with colleagues and more widely, so this cost could be reduced when bespoke podcasts are not required. 
  • The case study largely focused on the value the lecturer's podcasts offered to the students. However, the opportunity for students to respond with their own pieces actually offers very real potential for excellent reflective and collaborative learning opportunities which were not extensively discussed. 

Innovative practice with elearning - case study 3

The third case study from the JISC Tangible Benefits publication was

The University of Nottingham Case Study: Moving from Optical Mark Recognition (OMR) to Computer Based Assessment (CBA) for summative exams in medicine

The Nottingham Medical School used optical mark recognition (OMR) for scoring exams, and wished to move to computer-based assessment (CBA) instead. There were two key drivers for this, namely the time pressures of marking (with OMR each exam paper must be scanned, so marking time increases in line with class size) and the desire to incorporate images (e.g. microscope and x-ray slides) into questions, which was difficult to do with sufficiently accurate reproduction on paper. There was also the attraction of being able to use interactive question types such as drag-and-drop labelling and image hotspots.

The case study appears to relate to 2003/4 when the Medical School began developing the TouchStone online assessment system and populating it with the question bank. At this time there was no institutional steer regarding computer-based assessment, and very few departments were using online assessment for high-stakes summative assessment.

The problems anticipated with moving from an OMR to a CBA approach were actually administrative, rather than technical or attitude-related. There was a difficulty finding computer labs large enough to examine the whole cohort simultaneously, and there were concerns about the cheating that online exams might afford (i.e. seeing other students’ screens and accessing forbidden materials during the exam).

This case study focused a lot on getting the electronic workflow process to support online exams right. This was clearly a large project with many stakeholders involved, including academics, administrators, subject matter experts, external examiners, disability experts and IT support. There was an awareness that by computerising all aspects of the workflow, quality could be enhanced by ensuring backups, collaborative working if any stakeholder was unavailable (within appropriate permissions-based access) and transparency of work.

Tangible benefits identified include:
  • electronic access permits all stakeholders to work on their part of the assessment process. Also, electronic storage of questions, past papers, student profiles and exam results means they can be analysed statistically
  • marking time was greatly reduced - this has positive impacts on scheduling subsequent activities such as moderation, follow-up exams, etc
  • ability to include multimedia question types, in particular image labelling and hotspots. Labelling and hotspot questions are similar to multiple choice with a diagram, but reduce cognitive load. Hotspot questions also reduce the probability of guessing correctly
  • online assessment makes it simple to accommodate some disabilities such as dyslexia and other visual problems
  • students can change their answer as many times as they wish before submitting - previously multiple marks on an OMR form could cause scanner errors
My thoughts:
  • This appears to be a good case study of process improvement through applying technology - there have been cost and time savings as a result
  • Technology also afforded better assessment - interpretation of various medical images is a key skill, but could not be adequately assessed by the previous method
  • Involvement of all key stakeholders from the beginning of the process was again identified as necessary for success - this is particularly important while there are staff who are IT-averse or shy.
  • The case study pointed out the inherent risks of online assessment, in particular the reliance on servers, client computers, network and power all being available for an exam to run. While this does appear to add risk to the exam process, I suspect that as daily life places ever greater expectations on the availability of these systems, prevention and mitigation of these risks becomes easier.

Innovative practice with elearning - case study 2

My next case study from the JISC Tangible Benefits publication was

Newcastle University Case Study: Use of a Virtual Learning Environment (VLE) to deliver a 'regional' medical school

This case study describes an initiative by the medical school at Newcastle to collate their learning resources online and organise them in a format linked to the medical programme. There were a couple of key drivers for this, but the most notable was the pre-VLE reliance on paper-based resources and the requirement to better meet the needs of a dispersed population. In particular, students in years 3-5 of their training are not physically located at the campus, but were required to return to the medical school for a day a week in order to collect paper notices and resources. This scenario seems almost unimaginable these days, but it’s hard to tell when this project dates from. Elsewhere in the case study they refer to the predominance of dial-up Internet, broadband not yet being widely available - does this date it to somewhere pre-2001-ish? In addition to the constraints on uploading materials implied by dial-up, they also expected initial staff resistance and a slow take-up given the significant change from current practice.

They had a number of fundamental philosophies in the design and integration of the VLE, including:
  • data is entered once and re-used as far as possible
  • the VLE should provide a user-friendly content management system, accessible to non-technical administrators
  • non-technical users are empowered to independently manage their content online
  • open-source software should be adopted whenever possible
  • presentation and content is customised for the individual, based on their role.
The first version of the VLE was rolled out as an additional tool alongside existing working habits. Users were not forced into the change, but allowed to familiarise themselves at their own pace, in addition to student and staff training sessions. Functionality of the VLE was gradually expanded, giving users greater support/tools without overwhelming them at the start. Within a short space of time the VLE grew to become an integral and embedded component of the degree programme.

Staff uptake was initially tricky, as expected. However, initial ‘champions’ were vital to the system’s success, promoting it to colleagues. Students quickly adopted the VLE and were grateful for it, putting pressure on other staff to contribute to it. Unexpectedly, there was a conceptual barrier in that some non-technical staff could not initially understand the service the VLE could provide.

Tangible benefits included:
  • significant improvements in student learning, assessment, pass rates etc. Attributed to 24/7 access to VLE and scope for independent learning
  • significantly improved student satisfaction with the learning process
  • significantly improved staff satisfaction with elearning - and the services and feedback provided have motivated staff to improve their resources. This in turn has led to staff being empowered to upload, manage and maintain their own material, and to more general improvements in staff performance, widening participation, etc.
The case study reports the VLE as being something of a victim of its own success, with new features expanding the horizons of users resulting in them demanding more. High usage at peak times also necessitated the purchase of a separate server. However, both of these are good problems to have!

My thoughts:
  • Rather like the first case study, this initiative was about making resources available online, although with different intentions.
  • The clearly articulated underpinning philosophies appear to have been very useful in guiding the development of the project, and, combined with the stakeholder development group, ensured the needs of various user groups were understood from the outset, allowing development of the ‘right’ tool
  • The unexpected lack of conceptual understanding among staff is similar to the previous Exeter case study - it underlines the importance of ensuring all those involved know why the development is occurring
  • Students rated the VLE highly - I find it hard to imagine a university relying on paper in the way this one previously did, and suspect that as schools and colleges also adopt VLEs, students will come to expect this type of provision as standard.
  • This was another ‘putting things online’ example, but it demonstrates significant benefits, not least because of the difficult distributed working environment before the VLE initiative. Expectations and experience were low to start with, but acceptance and outcomes grew very positively.
  • Providing content on a VLE isn’t that innovative now, but the radical changes it made for the geographically dispersed population in this study show it was genuinely innovative at the time. Perhaps this also illustrates a good test of an innovative solution - that it was not just a passing fad, but is now a recognised, even expected, way of doing business.

Friday 18 February 2011

Innovative practice with elearning - case study 1

My first case study from the JISC Tangible Benefits publication was

University of Exeter: Case Study: Online economics texts

My initial thought was that “online texts” sounds like an example of the repository-of-static-texts building that we so often lament when it is argued to be elearning. (I’m not claiming that online texts are not a part of elearning tools and strategies, but that to limit oneself to that one kind of implementation is to miss the wealth and variety of practice that elearning can bring). So, I wanted to see what had been done here which was interesting, valuable, and even innovative.

This case study, although undated, appears to report an initiative from 2006/7.

The School of Business and Economics at the University of Exeter has moved towards the provision of online formative exercises for several modules. Motivations for this include:
  • increased student numbers with very diverse mathematical backgrounds on some modules, which raised pedagogic issues they needed to address
  • a need to increase the amount of formative assessment and associated feedback
  • a need to cater for international students and students from outside the School
  • a need to have core modules which can be delivered by a number of people including temporary lecturers
Most students in the Business School are between the ages of 18 and 22 and enter with three high grade A levels. They are generally available to attend all lectures and classes and do not have family responsibilities. The School has already found the use of WebCT invaluable in ensuring that all students are able to access materials in advance of lectures.

However, some classes had attendance at 60-75% - lower than desired. Students were often not preparing exercises in advance of class, reducing the value of this contact time. While retaining lectures as the primary learning/teaching tool (in order to meet institutional expectations), the School introduced online exercises to be undertaken between lectures. They linked each lecture to an exercise in order to reinforce both elements. This allowed contact time to be used more productively on discussions, presentations, etc.

They anticipated a number of challenges, including:
  • student expectations of weekly classes/seminars etc
  • students were not familiar with e-learning, as it had not been used in previous modules
  • buying the set book was required in order to access the publisher’s online resources
  • resources were not embedded within WebCT, where students might have expected to find them
  • largest anticipated challenge was whether the students would actually do the work each week
Students and staff were trained on using the materials at the start of the modules/term. The activities were provided by the text book publishers and so were linked to the academic content of lectures. They included accessing case studies, videos and completing exercises such as multiple-choice, essay or graphical questions. Results were sent to the lecturers.

During the modules, the numbers of students logging in and completing assessments were recorded. Informal feedback from students was sought during the module and more formally afterwards. A record of log-ins was compared with exam results. There were concerns over the number of students completing the assessments each week, although numbers/percentages are not given. Students apparently did not cite unwillingness to buy the book/access code as a reason for not completing the assessments. Problems were identified in the dissemination of the approach to support staff, some of whom told students that 'There aren't any classes, just lectures', suggesting that the online classes were not ‘proper’ classes. This issue was addressed in staff training for the following year.

Tangible benefits were noted as including:
  • pass rates/ average marks equivalent to the pre-elearning implementation of the module
  • high student retention - i.e. students progressing to the next course
  • staff reporting enthusiasm for elearning - previously they had been concerned about the cost of designing materials, and use of publishers’ resources avoided this
  • e-resources mean all students on the module have access to the materials - and can access them at their own pace, revising if necessary
  • savings in staff time (reduced tutorials) and space (reduced classes)
  • the publishers’ materials were extensive - more than they would have had capacity to design and produce in-house
My thoughts:
  • Requiring students to purchase a book in order to use e-resources doesn’t sound very progressive. I appreciate, however, that this is a stipulation of the publisher, and that academic publishing more widely hasn’t yet figured out how to monetise the provision of online content in the way it can for books or paper journals.
  • The case study states that staff will meet to reflect on the approach, but doesn’t report the outcomes of this.
  • No discussion of any correlation between recorded log-ins and exam results.
  • Some students said that they worked in pairs and so only one name was recorded - therefore records of participation/activity completion may be inadequate. Also, if students are penalised for not having an activity recorded in their name (the study does not suggest that they were) this would be anti-collaboration.
  • This does seem to be a case of just putting activities online, although they were integrated with face-to-face teaching.
  • The use of publishers’ materials doesn’t sound particularly innovative, but by sourcing pedagogically sound materials, and not attempting to re-invent the wheel, they were able to provide materials more extensive than would have been possible in-house, and without putting off staff who were uncertain about the costs of implementing elearning - in fact they were convinced of its benefits
  • There was a mention of concerns that interactive discussion opportunities were being lost as the class was replaced with online activities. However, apart from the fact that the lack of discussion in classes was one of the reasons for moving to online activities, I wondered if the new online approach could have been modified to include discussions or interactive/collaborative activities.
  • Generally this seems to be an example of a well-planned and carefully supported implementation, which brings tangible benefits without the need to be revolutionary. Using publishers’ materials is not particularly ‘innovative’ (surely publishers wouldn’t go to the trouble of producing them if no-one were paying?) but it demonstrates good practice in the successful integration of quality elearning materials and activities into existing lecture courses.

Innovative practice with elearning

In 2008, JISC produced a publication, Exploring Tangible Benefits of e-Learning: Does Investment Yield Interest?, which looked at the impact of innovation on the learning process through the use of elearning. It includes case studies of nearly 40 examples of elearning, discussing their background, context, technology used, tangible benefits and lessons learned. The case studies are available from JISC at
http://www.jiscinfonet.ac.uk/case-studies/tangible.

Week 2 Activity 2 asked us to look at 4 case studies. I decided to choose case studies that illustrated effects on learning and effects on social equality, these being two of my personal prime drivers for being interested in education, and technology-supported education. I was a little surprised that although 29 of the 34 case studies reported an effect on learning, only 20 reported an effect on social equality, and the cross-over was further reduced. However, I picked these four, with some additional influences on my selection, which I note in my posts about each study:

University of Exeter, Online economics texts
Newcastle University, Use of a VLE to deliver a 'regional' medical school
University of Nottingham, Moving from Optical Mark Recognition (OMR) to Computer Based Assessment (CBA) for summative exams in medicine
Swansea University, Use of podcasting in Archaeology

My summaries and thoughts on the case studies to follow...

Reference
JISC (2008) Exploring Tangible Benefits of e-Learning: Does Investment Yield Interest? [online], http://www.jiscinfonet.ac.uk/publications/publications/info/tangible-benefits-publication

Wednesday 16 February 2011

Matrix of elearning concepts - mark 1

This is a quick post as I'm a bit behind this week. Hoping for some big catch up sessions on Thursday and Friday... trying not to panic yet!

Using the definitions of elearning concept we'd found, the task for Week 2 Activity 1 was to arrange those concepts on a matrix, showing where they might be categorised on the spectrum of Existing to New and of Formal to Informal.

This is my matrix:

A number of the concepts had me wondering about where to place them (obviously the point of the exercise!), so I also recorded some notes about why I positioned things as I did:

Blended learning - blend of new and existing, formal and informal. The fact that it includes web-based learning etc suggests it tends towards the new and informal.

Mobile learning - relies on mobile technologies, and we cannot yet assume ubiquitous access, therefore still quite 'new'. While it may be used for formal learning, it gives greater opportunity for informal learning.

Virtual communities - initially largely informal. As there is wider uptake and integration into mainstream education, do they become more formal?

Flexible learning - I don't think this has yet been taken up by the majority of educators. Not yet integrated into most programmes.

Work-based learning - apprentice schemes etc have long history, therefore recognised as 'formal' training. However, OTJ aspect means training may be delivered informally.

Personalisation - widely available in informal, not-necessarily-learning-focused environments. Still a difficulty in achieving personalisation when there is a syllabus to be taught, standards to be met and need to check all material is covered, etc.

Just-in-time learning - relies on access to resources, therefore predominantly a new web-dependent method. No formal teaching - students learn what they want, when they require it. Risks lacking broader understanding as likely only to learn what is sufficient to achieve a task. May be some deep learning as result of applying it to context though.

Peer assessment - does not require new technology, but wider uptake *is* facilitated by web, etc. Not so widely used for formal (summative) assessment, and technology facilitates wider opportunities for informal assessment (e.g. commenting on blogs etc).

Collaborative learning - again doesn't require technology, but additional forms of collaboration may be facilitated by ICTs. Widely recognised as valuable medium, but still seems to be used mainly as an adjunct to more traditional approaches.

Learning objects - developed specifically for LMS-delivered computer-based learning, therefore 'new'. Standards, metadata requirements etc are a way of formalising the development. Usage is not restricted to formal context though.

E-assessment - computer-supported marking has been available for probably 20 years. Wider and more flexible uses are a newer development. I use assessment here in a summative context, therefore position it towards the formal axis - more informal, ad-hoc, on-demand, non-recorded or just-for-fun formative assessment is also possible though, and ICT is the prime facilitator of this.

I actually found the formal-informal dimension quite hard to use, which was something of a surprise. I think I need a bit more of a think about what this really means. Also, I have a feeling that there might be some other dimensions on which it would be useful to arrange these concepts. A lot of the discussion and reading in week 1 seemed to indicate that innovations in elearning have particular implications for collaboration and community, so I wonder if an individual - collaborative axis might be useful to look at. This post is optimistically titled mark 1 - I'm hoping there'll be a chance for a mark 2, but don't hold your breath!

Sunday 13 February 2011

Elearning concepts

This post is hardly going to be the most insightful. It's simply the report of the result of Googling a number of elearning concepts, in search of definitions. More analysis of these concepts, and how they relate to new or existing and informal or formal learning coming up in the next post... (I hope!)


Blended learning
Mobile learning
Virtual communities
  • Synonymous with Online community: “A meeting place on the Internet for people who share common interests and needs. Online communities can be open to all or be by membership only and may or may not be moderated.”[http://www.astd.org/LC/glossary.htm] 
  • A virtual community is a social network of individuals who interact through specific media, potentially crossing geographical and political boundaries in order to pursue mutual interests or goals. One of the most pervasive types of virtual community include social networking services, which consist of various online communities. The term virtual community is attributed to the book of the same title by Howard Rheingold, published in 1993.[http://en.wikipedia.org/wiki/Virtual_community]
Flexible learning
Work-based learning
  • Work Based Learning generally describes learning while a person is employed. The learning is usually based on the needs of the individual's career and employer, and can lead to nationally recognised qualifications. There are usually three components to Work Based Learning. These are practical skills, underpinning knowledge and key skills.[http://www.thedataservice.org.uk/datadictionary/businessdefinitions/WBL.htm]
Personalisation
Just-in-time learning
Peer assessment
  • Self or Peer Assessment is the process of students or their peers grading assignments or tests based on a teacher’s benchmarks.[1] The reasons that teachers employ Self- and Peer-Assessment are that it will save them time, students may gain a better understanding of the material, and student’s metacognitive skills may increase. Rubrics are often used in conjunction with Self- and Peer-Assessment.[2][http://en.wikipedia.org/wiki/Self-_and_Peer-Assessment]
  • Student assessment of other students' work, both formative and summative, has many potential benefits to learning for the assessor and the assessee. It encourages student autonomy and higher order thinking skills. Its weaknesses can be avoided with anonymity, multiple assessors, and tutor moderation. With large numbers of students the management of peer assessment can be assisted by Internet technology. Peer assessment is assessment of students by other students, both formative reviews to provide feedback and summative grading. Peer assessment is one form of innovative assessment (Mowl, 1996, McDowell and Mowl, 1996), which aims to improve the quality of learning and empower learners, where traditional forms can by-pass learners' needs. It can include student involvement not only in the final judgements made of student work but also in the prior setting of criteria and the selection of evidence of achievement (Biggs, 1999, Brown, Rust and Gibbs, 1994).[http://www.keele.ac.uk/depts/aa/landt/lt/docs/bostock_peer_assessment.htm]
Collaborative learning
  • Collaborative learning is a situation in which two or more people learn or attempt to learn something together.[1] More specifically, collaborative learning is based on the model that knowledge can be created within a population where members actively interact by sharing experiences and take on asymmetric roles.[2] Collaborative learning refers to methodologies and environments in which learners engage in a common task where each individual depends on and is accountable to each other. [http://en.wikipedia.org/wiki/Collaborative_learning]
  • "collaborative learning" refers to an instruction method in which learners at various performance levels work together in small groups toward a common goal. The learners are responsible for one another's learning as well as their own. Thus, the success of one learner helps other students to be successful. Proponents of collaborative learning claim that the active exchange of ideas within small groups not only increases interest among the participants but also promotes critical thinking. There is persuasive evidence that cooperative teams achieve at higher levels of thought and retain information longer than learners who work quietly as individuals. The shared learning gives learners an opportunity to engage in discussion, take responsibility for their own learning, and thus become critical thinkers.[http://www.gdrc.org/kmgmt/c-learn/index.html]
Learning objects
  • A reusable, media-independent collection of information used as a modular building block for e-learning content. Learning objects are most effective when organized by a meta data classification system and stored in a data repository such as an LCMS.[http://www3.imperial.ac.uk/ict/services/teachingandresearchservices/elearning/aboutelearning/elearningglossary#f]
  • Learning objects (sometimes called 'reusable learning objects' or RLOs) are small, self-contained packets of digital content, typically only 2 to 15 minutes in duration. Learning objects can be aggregated into courses, tagged with descriptive metadata (allowing search engines to find them) and can communicate with a learning management system (LMS), typically using SCORM.[http://www.elearningnetwork.org/wiki/learning-objects]
  • "a collection of content items, practice items, and assessment items that are combined based on a single learning objective" [1]. They will typically have a number of different components, which range from descriptive data to information about rights and educational level. At their core, however, will be instructional content, practice, and assessment. A key issue is the use of metadata. Learning object design raises issues of portability, and of the object's relation to a broader learning management system.[http://en.wikipedia.org/wiki/Learning_Objects]
e-assessment
  • e-assessment is the use of information technology for any assessment-related activity. This definition embraces a wide range of student activity ranging from the use of a word processor to on-screen testing. Due to its obvious similarity to e-learning, the term e-assessment is becoming widely used as a generic term to describe the use of computers within the assessment process. Specific types of e-assessment include computerized adaptive testing and computerized classification testing. E-assessment can be used to assess cognitive and practical abilities. Cognitive abilities are assessed using e-testing software; practical abilities are assessed using e-portfolios or simulation software.[http://en.wikipedia.org/wiki/E-assessment]
  • e-assessment is the use of computers and computer software to evaluate skills and knowledge in a certain area. It can range from on screen testing systems that automatically mark learners' tests (often providing almost instant feedback), to electronic portfolios where learners' work can be stored and marked. Both e-Assessment and e-Portfolios are becoming a fundamental part of modern education. They are essential for personalised learning providing benefits for learners, teachers and those involved with the administration of assessment within schools, colleges and training providers.[http://www.ocr.org.uk/eassessment/index.html] 
  • Technology can support nearly every aspect of assessment in one way or another, from the administration of individual tests and assignments to the management of assessment across a faculty or institution; from automatically marked on-screen tests to tools to support human marking and feedback. Clearly, though, for technology-enhanced assessment to be effective, pedagogically sound developments need to be supported by robust and appropriate technology, within a supportive institutional or departmental context. 'Technology-enhanced assessment' refers to the wide range of ways in which technology can be used to support assessment and feedback. It includes on-screen assessment, often called e-assessment. [http://www.jisc.ac.uk/whatwedo/programmes/elearning/assessment.aspx]

Discussing true innovation in elearning

This first week's course forum discussions have had a view to moving closer to an understanding of the concept of innovation in elearning. There have been many interesting points raised, and questioning of whether something is or is not innovative. This post tries to draw together a few threads from our discussions which seem to identify criteria or elements of what members of my tutor group consider to be true innovation in elearning.
  • ‘innovation’ in the context of e-learning involves a sense that something may be achieved using that technology that otherwise would not or that the outcome is materially improved using e-technology
  • one of the great benefits is the opportunity for interactivity - innovation in elearning appears not to be static and monolithic
  • innovation by definition in Wikipedia is :"the process that renews something that exists and not, as is commonly assumed, the introduction of something new."
  • it's really important that elearning innovation delivers measurable benefits
  • there is a question of whether the technology made the difference or whether having the technology facilitated the use of different methods. Innovation is the realisation of the idea, and technology could support that
  • innovations in e-learning seem to have shifted the pattern of learning from centralised class room activity to a collaborative yet de-centralised nature
  • innovation in elearning is enabling something different or materially better to occur that cannot happen using existing methods
  • innovation can be equated with progress, improvement, and with something that is new (in either revolutionary or evolutionary ways).
  • innovation has the potential to change things for the better, although this may not always be achieved
  • innovation does not necessarily mean something completely new, often it refers to a new use or outcome from an existing (but differently used) method or technology
Colin King found an article on technology for technology's sake, and suggested that its conclusion may be a clue as to how we should approach technology in learning. I think it's a perfect way to sum up this discussion:
"The truth of the matter is that technology is neither the problem nor the solution, it cannot be blamed for what we do with it. It is the way we actively choose to apply it that matters."
Small print:
This post draws heavily on the contributions of tutor group members to our discussion forum. However, as that is a private student-only area I have not added names to suggestions above, but have attempted to paraphrase and draw together themes on which there was agreement across participants. Full credit and references will of course be given in any course assignments.

Wednesday 9 February 2011

Beginning to explore innovation in elearning

Week 1, activity 3 is beginning to explore what we mean by innovation in elearning. This post summarises and records some thoughts relating to papers on the subject.

Firstly, ‘New technology in learning: a decade’s experience in a business school’ by Rich and Holtham (2005).

This paper describes the introduction of innovative uses of IT into campus-based MBA courses in 1992 - almost 20 years ago. At this point home and business computing was minimal and email was only just beginning to be explored within the academic network. It doesn't go on to describe changes over the decades since, but it's a short paper, so forgiven!

The MBA programme used email based activities with the aim of adding value to their existing process, particularly through fostering effective group work. Valuable opportunities for international collaboration via email were noted, and email was used for simulations in which students could practise management skills. The paper is brief and doesn’t state in detail how value of the interventions was measured.

It is noted that there was reluctance amongst both some faculty and some students. Some felt that IT should be taught as a practical skill or separate topic, and didn’t appreciate its value as a learning tool. Students’ initial computer literacy was variable, and the need to learn basic computer skills obscured further learning outcomes for some.

The paper reports that changes to teaching/learning were achieved by using IT “as an adjunct to existing channels for instruction”. Given the unfamiliarity with technology of many of the students, I wonder if this use of IT as an additional rather than sole medium made the technology more acceptable, appealing and less daunting to use. Perhaps this is a note to not go overboard with innovation, but to ensure that students are guided in their development, rather than thrown in at the deep end?

Students are described as having participated in a “dynamic knowledge network... which was used to encourage professional knowledge creation... exploiting active learning and knowledge building...” These seem to me to be very common aims or uses of ‘innovative’ elearning, i.e. the development of community, social and connectivist learning and knowledge creation and sharing with varying degrees of collaboration. These can be aims of non-e-learning too, so I’m wondering about what additional or further goals innovative elearning might be able to achieve.

The paper concludes that a small group of faculty with only limited technical resources were able to produce innovative, pedagogically sound material using ICT. I wonder whether, if the group had in fact been larger or had wider institutional support, this might have hindered their ability to be flexible and creative in their learning design. Or perhaps, conversely, did the lack of wider support mean that this was a limited experiment? We’re not told about subsequent elearning innovations trialled by this particular institution, and clearly the growing uptake of elearning suggests that lack of support is not excessively stifling. However, a number of student colleagues on H807 have expressed frustration at their own institutions’ reticence to support or adopt innovation. It seems that a certain critical mass might be necessary to get early innovation going, but that attaining wider acceptance is much harder, and indeed may limit the level of innovation which can be achieved within an institution.

Take away points/thoughts:
  • Despite a massive growth in familiarity with IT in past 20 years, we should not assume that technological innovations (even with pedagogical value) will be universally appreciated by students/educators.
  • The need to acquire technical skills could obscure other learning objectives if computer (and wider technology) literacy is assumed or under-supported.
  • Innovation in elearning needs sufficient technical support and positive attitudes towards adoption, without being stifled by large bureaucratic institutions.
  • The paper didn’t talk about how success of the innovation was measured. Measuring what is innovation and what makes a successful innovation are areas for more thought!

The second paper was ‘Innovative teaching: sharing expertise through videoconferencing’ by Lück and Laurence (2005).

This is a 2005 paper referring to a process of developing videoconferencing to support lectures delivered to a tourism degree class in 2003 by guest lecturers based overseas. I was somewhat surprised to see the substantial effort which was expended on developing the technology - this wasn’t a case of adopting a commercial off-the-shelf solution and applying it to the interactive lecture scenario, as I would have assumed such technology might have been available in 2003.

The aim of the work was to provide a technical solution which would permit guest lecturers, possibly based on another continent, to lecture to a local class. The desire was to enhance student knowledge through exposing them to the perspectives of international experts with whom they would otherwise be unlikely to be able to interact. It was this contact with “out-of-class sources” which students rated particularly positively.

The paper also cites the principles of “good practice in undergraduate teaching” by Chickering and Ehrmann (1996), including:
  • encouragement of contacts between instructor and students,
  • involvement of active learning
  • respect for diverse ways of learning
  • prompt feedback.
These were written pre-widescale adoption of elearning, but still provide useful guidance. I was somewhat surprised by the argument that the ‘prompt feedback’ principle was fulfilled by asking students to complete evaluation forms after the event; I was expecting this type of principle to relate to providing feedback to the student on their learning - not an elearning or innovation issue though!

The authors argue that videoconferencing offers new possibilities to higher education, predominantly through the extended opportunities for collaboration. They note that they would prefer not to view videoconferencing as a replacement for face-to-face class time, but as an event integrated into the class syllabus. This seems to echo the ‘adjunct to existing channels’ argument made by Rich and Holtham.

Again, this paper was about the use of technology to expand knowledge networks - an apparently emerging theme for H807. The aspect I found hard to believe was that video-conferencing was particularly innovative in 2003. However, it was clearly a novel, engaging and inspiring experience for both the students and lecturers involved in this project, and therefore, for them, innovation in elearning. This raises a new (still ill-formed) thought for me - that innovation is relative to its context. What is new and innovative for one user or user community may be old hat to another. Applying a technology or method in that scenario is innovative, even if not a brand new idea which would be globally ‘innovative’. So, some aspects of innovation can involve re-use or re-application of techniques. Innovation doesn’t require re-inventing the wheel.

Take-away points/thoughts:
  • Innovation may be through technology development, or through the application of technology in some new way
  • An activity or use of technology may be innovative in one context but not in another
  • Students valued opportunities for interaction particularly highly
  • Providing a technology, even though it may offer a valuable opportunity (e.g. a technological solution for interacting with someone geographically distant), does not necessarily mean that the activity will be easy. Everyday social influences, such as feeling intimidated by a senior/expert, or conventions around group interaction, will also play a part.

References:
    Rich, M. and Holtham, C. (2005) ‘New technology in learning: a decade’s experience in a business school’, British Journal of Educational Technology – Special Issue on Innovation in Elearning, vol.36, no.4, pp.677–679 http://libezproxy.open.ac.uk/login?url=http://dx.doi.org/10.1111/j.1467-8535.2005.00545.x

    Lück, M. and Laurence, G.M. (2005) ‘Innovative teaching: sharing expertise through videoconferencing’, Innovate, vol.2, no.1 [online] http://www.innovateonline.info/pdf/vol2_issue1/Innovative_Teaching-__Sharing_Expertise_through_Videoconferencing.pdf

15 minutes of thinking...

A gentle start on the course theme for week 1, activity 3 - jotting down a few initial ideas of your own about innovation in elearning... here's what I thought.

Does just doing the same thing but with new technology mean it’s innovative? I think not - technology per se does not equal innovation, but it may lead to or facilitate it. For example, putting a spelling test online doesn’t make it innovative, but offering learners a context in which to understand their results, opportunities to practise using the words, links to example uses, ways to document their progress, or reflect on what they’ve learnt might make it innovative. There seems to be a rush to add an e- or i- prefix, but that certainly doesn't automatically make something innovative.

Lots of ‘innovative’, or at least new or re-thought, ways of learning seem to involve social aspects of learning - communication, collaboration, development of communities of practice. These aren’t innovative in themselves - we have long encouraged them by teaching students in classes rather than as individuals, and by gathering together in university departments rather than writing theses alone. What the technology has facilitated is predominantly the opportunity to access a far wider audience/community, rapid publishing and feedback, access to resources and peers, and greater opportunity for co-creation and sharing of knowledge.

Does something have to be truly iconoclastic in order to be innovative? My argument that ‘just doing it online doesn’t make it innovative’ might seem to suggest so. I'm reminded a bit of Dillenbourg (1992) 'The Computer as a Constructorium'. Dillenbourg described an iconoclastic goal of breaking the learner’s passive model of learning. He noted that becoming aware of one’s own knowledge, i.e. the ‘reflection’ process, is of interest not only to researchers on metacognition, but also now to designers of educational computing systems. Dillenbourg wasn't only interested in elearning, but I still feel that elearning that can meet his goal would quite probably be innovative - even today, when you might hope our expectations for computer-supported learning have moved on.

What about measuring innovation? Do you need to measure how innovative something is? (and can you anyway?!) We do need to be able to compare effectiveness with existing methods - there is no point being innovative if it doesn't improve something. However, the improvement doesn't necessarily have to be in outcomes - it could be in motivation, student retention, engagement, etc... some of which are tricky concepts to measure too.

Reference:
Dillenbourg, P. (1992) ‘The computer as a constructorium: tools for observing one’s own learning’ in Elsom-Cook, M. and Moyse, R. (eds) Knowledge Negotiation, London, Academic Press, pp.185–198 http://tecfa.unige.ch/tecfa/publicat/dil-papers-2/Dil.7.1.2.pdf

Saturday 5 February 2011

Getting started with Twitter

It's been really nice to see some early engagement between course participants using Twitter. I'm looking forward to the extra channel it will provide for H807.

That said, I've also been recalling how I felt about Twitter before I did H808. I really didn't entirely get what it was all about. After I'd got to understand it a bit, I thought I'd write something which might be helpful for other new OU Twitterers. So, what follows is something I posted in a course forum back in September 2009, but I hope it's still relevant for anyone getting started.


I've recently been dabbling with Twitter, and think I may be beginning to be converted from really not 'getting it' to maybe having an inkling of what it's all about. There are plenty out there who believe Twitter is great for learning... so I want to see what I'm missing out on.

I thought I'd post some basics and a few observations here, and invite the more active and experienced Twitterers of H808 [and now H807] to help explore its potential and suggest why it might be a valuable elearning tool.

The Basics
  • Twitter is a 'micro-blogging' tool - you make posts (called 'Tweets') of a maximum of 140 characters
  • Like everything else 'web', you create an account - from here you can post your tweets and follow those of other Twitterers
  • Twitter suggests you use your tweets to answer the question 'What are you doing?', but reading posts such as "I'm eating a cheese sandwich" is dull (and possibly not very educational!), so many Twitterers post whatever they like
  • You can follow other Twitterers - this means that any tweets they make are added to your Twitter home page

Taking it a little further
  • You can search Twitter - for whatever you like. It'll return tweets containing your search term (obviously!)
  • You can add hashtags to a word in your tweet, e.g. "I'm studying ePortfolios for #H808 at the moment". This makes it easy to create communities of people interested in the same topic, by making it easier for them to find and share info on that topic
  • You can reply to another user's tweet, or address a tweet to them, by including @ in front of their username, e.g. "Great presentation by @davidjones at Serious Games conference today"
  • You can 're-tweet', i.e. duplicate/forward someone else's tweet, web link or blog post - and it's not cheating! In fact, having your thoughts or comments re-tweeted is a compliment and social currency. Just include RT at the start of your post to indicate that it's a re-tweet
  • All usernames and hashtags in tweets become automatically hyperlinked - so if someone tweets "RT @rob_roy blogging on #eLearning critique", I can click on rob_roy to see all of that user's tweets, or on #eLearning to see a list of tweets including that tag. This also makes it really quick to move around a community and get introduced to new Twitterers
  • You can add links to your tweets to pass on items which have caught your interest. It's good practice to use TinyURL (or similar) to shorten the URL you paste in.
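[An aside that wasn't in the original forum post: for the programmatically minded, the conventions listed above - the RT prefix, @mentions, #hashtags and the 140-character cap - are regular enough to pull out of a tweet automatically. Here's a minimal Python sketch; the function name and regular expressions are my own illustration, not part of any official Twitter tool.]

```python
import re


def parse_tweet(text):
    """Pick out the Twitter conventions described in the list above:
    the RT prefix, @mentions, #hashtags, and the 140-character limit."""
    return {
        "is_retweet": text.startswith("RT "),          # re-tweets start with RT
        "mentions": re.findall(r"@(\w+)", text),       # usernames after @
        "hashtags": re.findall(r"#(\w+)", text),       # topic tags after #
        "within_limit": len(text) <= 140,              # the 140-character cap
    }


info = parse_tweet("RT @rob_roy blogging on #eLearning critique")
print(info["is_retweet"])  # True
print(info["mentions"])    # ['rob_roy']
print(info["hashtags"])    # ['eLearning']
```

[This is roughly what clients like TwitterFox do when they turn usernames and hashtags into clickable links.]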

As I said, I'm pretty new to Twitter. When I had to visit the Twitter web site to see people's tweets, I didn't get round to it much. Similarly, when I only followed one or two friends there wasn't much point - I got more info about them from their blogs, Facebook or other streams. I really couldn't see why it was any use for education. It was three things that helped me to start thinking differently:
  1. understanding hashtags and search - allowing me to quickly find tweets on topics of interest
  2. getting the TwitterFox [now re-named Echofon] add-on for Firefox, meaning updated tweets from those I follow are automatically delivered to my browser - I don't have to go to the Twitter home page (and I don't have an internet phone). There are all sorts of other similar gadgets out there - others with more knowledge will be able to advise better than I can
  3. finding Carol Cooper-Taylor's eLearning blog post on 50 ideas for using Twitter for education

I hope this might encourage anyone who is reluctant about Twitter, as I was. Now, does anyone who's already using Twitter have any thoughts on its educational uses? Or on its uses for helping us in studying H808 [and H807]?