When introducing this event on Learning Metrics last week, Roger Broadie, Chief Executive of the European Education Partnership, suggested that academic researchers have been unwilling to draw together a common base of learning theory on which to build measures of learning. Meanwhile, practitioners have a job to do, and have to get stuck in and measure what's going on.
By the end of the event, the conclusion suggested by the examples presented was that there will probably never be a single unified theory of learning, and that a horses-for-courses approach that adapts learning metrics to circumstances will continue to be the most practical and useful. Moreover, all the interesting measures depend on judgements that are subject to interpretation: there are no fixed references or quantifications that, on their own, tell you anything very interesting.
The majority of the presentations focused on e-learning — or at least learning supported somehow by ICT systems — in schools, where there are established practices for measuring progress.
Graham Taylor, from Sawtry Community College, presented research he'd led based on experience with Microsoft's Anytime Anywhere Learning initiative, which gives school-kids access to laptops with wireless net access, if they want it. Taylor's metrics were based on an adapted version of Bloom's taxonomy of educational objectives, and he determined that half of these levels should be classified as 'higher' than the others (the evidence to justify this was not clear). He found that children with laptops spent more time at higher levels (44% vs 14%). Those without sometimes reached the same levels, but less systematically and less often. One ready-to-hand interpretation of these results was that the laptops enabled children to do rote tasks (e.g. copying) more quickly, leaving more time for higher-order analytical tasks.
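To make the arithmetic concrete, here is a rough sketch in Python of how such a time-at-level metric might be tallied. The level names and the lower/higher split are illustrative assumptions drawn from the classic Bloom taxonomy, not Taylor's actual coding scheme, and the minute counts are invented to reproduce the reported percentages.

    # Illustrative sketch: share of observed time spent at 'higher' Bloom levels.
    # The six levels are the classic Bloom taxonomy; treating the top three as
    # 'higher' mirrors the half-and-half split described above (an assumption).
    HIGHER_LEVELS = {"analysis", "synthesis", "evaluation"}
    LOWER_LEVELS = {"knowledge", "comprehension", "application"}

    def share_at_higher_levels(observations):
        """observations: list of (bloom_level, minutes) pairs from coded sessions."""
        total = sum(minutes for _, minutes in observations)
        higher = sum(minutes for level, minutes in observations
                     if level in HIGHER_LEVELS)
        return higher / total if total else 0.0

    # Hypothetical coded data for one lesson, contrived so the proportions
    # match the reported figures: laptops cut rote-task time, so more minutes
    # land at the analytical end of the taxonomy.
    with_laptops = [("knowledge", 10), ("application", 18),
                    ("analysis", 14), ("evaluation", 8)]
    without_laptops = [("knowledge", 25), ("comprehension", 15),
                       ("application", 3), ("analysis", 7)]

    print(f"{share_at_higher_levels(with_laptops):.0%}")     # 44%
    print(f"{share_at_higher_levels(without_laptops):.0%}")  # 14%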
Children with laptops also showed higher motivation for learning, and this applied to both boys and girls. This could be because they feel more engaged with their activities through the medium of technology. Or it could be that they're just getting a buzz off having a sleek new toy that makes school seem like less of a drudge — an effect that will wear off with time and increasing use of such equipment. The head of a posh New York school that's using iPods for learning pretty well admits as much in this New York Times article (requires free registration).
James Blomfield of Intuitive Media described an evaluation, undertaken by Manchester Metropolitan University, of the learning that takes place in their Grid Club service for school-children. Grid Club enables children aged 7-11 to play games, take part in forums, and create newsletters and web sites. The evaluation assessed learning along six measurement dimensions.
Observing interactions on Grid Club, the evaluators were able to find evidence of children showing, for example, commitment to learning, willingness to take responsibility, and positive interactions with their peers. These are broad measures, which require some subjective interpretation to assess. It's hard to imagine any semi-controlled environment for interaction between children that would not produce evidence of some of these outcomes.
By comparison with the school-based case studies, Ben Gammon's presentation was interesting in focusing on the less structured context of learning in museums, and also focusing partly on measuring failure to learn. Ben is Head of Learning and Audience Development at the Science Museum.
Ben observed that asking museum visitors if they learnt anything often gets a misleading answer. Many visitors associate 'learning' with drudgery and the digestion of brute facts, so they assume that discovery, activities and new perspectives do not count as learning. As with the school cases, Ben's solution was based on a model of learning that — in including cognitive, affective, skills, social and personal dimensions — draws on an amalgam of previous theories.
Again the Science Museum took a qualitative, interpretative — and thus labour-intensive — approach to measuring visitor behaviour against these dimensions. Their methodology asked questions like: Do people complete the activity? What's their body language? What conversations do they have? What reactions do they report? They then measured these observations against expectations of what they would see if learning were taking place.
As Ben put it, "If the only thing you measure is success, then that is all you will see". You can list all the battles you've won, but you may still lose the war. The museum's approach therefore aimed to measure failure as well: to identify people who leave more confused or frustrated than when they arrived. Through this they aimed to identify physical, intellectual or motivational barriers to learning.
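As a thought experiment, the museum's rubric could be imagined as a simple per-visitor coding record that captures failure signals alongside the learning dimensions. A minimal sketch in Python, assuming the five dimensions and three barrier types named above; all field names are my own illustration, not the museum's actual instrument.

    from dataclasses import dataclass, field

    # The five dimensions come from the learning model described above; the
    # barrier categories echo the physical/intellectual/motivational split.
    DIMENSIONS = ("cognitive", "affective", "skills", "social", "personal")
    BARRIERS = ("physical", "intellectual", "motivational")

    @dataclass
    class Observation:
        completed_activity: bool        # did the visitor finish what they started?
        body_language: str              # free-text note, interpreted later
        conversation_notes: str         # what did they say to companions?
        evidence: dict = field(default_factory=dict)  # dimension -> judgement
        barriers: list = field(default_factory=list)  # failure signals

    # A hypothetical record of a visit that went wrong: some evidence of initial
    # engagement, but the visitor leaves more confused than they arrived.
    obs = Observation(
        completed_activity=False,
        body_language="leans in, presses buttons, then shrugs and walks away",
        conversation_notes="'I don't get what this is supposed to show'",
        evidence={"affective": "initial curiosity", "cognitive": None},
        barriers=["intellectual"],
    )
    print(obs.barriers)  # ['intellectual']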
Interestingly, Ben made no reference to the Museums, Libraries and Archives Council's Inspiring Learning for All framework, though I suspect much of what he described could have been adapted to fit within this framework. Since I first wrote about this framework, I've changed my perspective on it, realising its purpose is less to innovate in measuring learning, and more to provide a flexible scheme that can cover a wide range of contexts.
Chris Yapp focused on metrics for organisational learning, drawing on work he did at the National Council for Educational Technology (now Becta) in the early '90s. His framework was based on Venkatraman's work at MIT, which proposed five levels of organisational transformation based on use of ICT systems: localised exploitation; internal integration; business process redesign; business network redesign; and business scope redefinition.
The first two of these levels are 'evolutionary', whereas the remaining three are 'revolutionary'. It wasn't clear to me from Chris's presentation how he meant to measure where an organisation's use of ICT systems falls on this spectrum.
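The shape of the model is at least easy to write down; it's the measurement function that remains open. In this sketch the level names come from Venkatraman's published framework, while the assessment stub is my own placeholder for the judgement-based step that the presentation left undefined.

    # Venkatraman's five levels of IT-enabled organisational transformation,
    # split into 'evolutionary' and 'revolutionary' as described above.
    LEVELS = [
        ("localised exploitation", "evolutionary"),
        ("internal integration", "evolutionary"),
        ("business process redesign", "revolutionary"),
        ("business network redesign", "revolutionary"),
        ("business scope redefinition", "revolutionary"),
    ]

    def assess(organisation):
        """Return the index of the level an organisation has reached.
        There is no obvious quantitative test, so this stub stands in
        for an interpretative, judgement-based assessment."""
        raise NotImplementedError("requires interpretative judgement")

    for i, (name, kind) in enumerate(LEVELS, start=1):
        print(f"{i}. {name} ({kind})")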
Chris took up a job as Head of Public Sector Innovation at Microsoft last year, so I guess this is close to his heart. Having worked as an in-house consultant on business process redesign in the public sector myself ten years ago, all I can say is 'good luck'.
What emerges from these and other presentations is that different players clearly have different interests in learning metrics, and it's not practical to expect one set of measures to suit everyone's purposes. For example, Ben Gammon argued that funders often ask his museum to measure unhelpful things (often demonstrating facts of limited utility, such as that the exhibition halls near the museum entrance attract more visitors than those further away).
In his introduction, Roger Broadie listed the stakeholders in learning metrics as educators, custodians of knowledge, funders, politicians, learning system developers, and learners themselves. While Phil Hemmings, from Research Machines, argued that measures are a 'good thing', which drive improvement, Peter O'Hagan (Serco) observed "you can't make a pig fatter by weighing it every day".
Phil went on to urge more measures that look at learning processes, rather than outputs. In this his purpose is to get more diagnostic information to improve learning effectiveness (rather than to have publishable benchmarks of progress, which I guess is one of the purposes of the Secretary of State for Education). At the same time, Phil observed that a well-known side-effect of measurement is that you get what you measure, and that you don't need to understand a process fully to improve it.
One thing I'd have liked to have seen from the event is a consideration of a wider range of learning contexts and a wider range of purposes for measurement. Almost all the speakers were either vendors of technology-based learning or professional advocates for it. This background must colour the kinds of measures they're interested in.
Also, no-one addressed the different kinds of measures that should be applied to e-learning compared with classroom learning — a topic which this paper (requires free registration) addresses in some depth and with some vehemence.
Posted by David Jennings in section(s) E-learning on 13 December 02004 | TrackBack

The NMK web site now has its own report of the event.
Posted by: David Jennings on 15 December 02004 at 3:28 PM

"Graham Taylor, from Sawtry Community College, presented research he'd led based on experience with Microsoft's Anytime Anywhere Learning initiative, which gives school-kids access to laptops with wireless net access, if they want it. Taylor's metrics were based on an adapted version of Bloom's taxonomy of educational objectives"

Where can I read a copy of this particular research?
I suggest that you contact Sawtry Community College and ask Graham Taylor directly for a copy (I'm sorry, I do not have his personal contact details).
Posted by: David Jennings on 27 November 02005 at 10:46 AM