The line-up for this e-learning event in London on 15 December includes a good number of influential and interesting speakers, such as Diana Laurillard and Donald Clark (plus a few who I feel deserve less influence). Laurillard will be presenting the DfES's unified e-learning strategy consultation document.
It looks good value at £110 (or £90 for small organisations and freelancers). If you plan to go, let me know and we can say hello...
Why do resources become more reference-worthy when other people refer to them? I suppose it's another of those success-breeds-success network effects. So it's only now, with Stephen Downes having seen fit to comment on it, that I get round to referencing a document Seb Schmoller compiled from contributions by me and other e-learning professionals.
Back in September, Seb asked seven of us for 50-100 words in answer/reaction/response to the question, "Embedding the skills to teach online - is it technical or personality (sic) skills that are needed?" Ever the contrary one, the bulk of my response was graphical. You can download the four-page PDF file with the collated responses via the link to Stephen Downes' comment above or via this page (under the title "Paper about online tutoring skills used by Seb Schmoller at South West of England e-Learning Conference").
MusicAlly publishes a fortnightly trade report on the impact of digital technologies on the music business, with the emphasis on business. The free sample issue is more concerned with intellectual property rights, licensing and "what's to be done about file sharing" than with music per se. The features I found most interesting, and most music-related, are one on whether ringtones are really as important in the market as singles (MusicAlly says not; phew), and one on how a niche band like Phish is demonstrating, with its profitable LivePhishDownloads site, that you actually can make money by selling downloads (in this case "official bootlegs" with no copy protection).
Being cheeky, it's hard to resist pointing out that the 15-page, £40 newsletter offers far fewer bytes per pound than your average CD (and with no studio costs), but it is probably still better value than consulting a lawyer, who'd charge a similar amount for picking up the phone and breathing down it once. For only £20 you can pick the brains of MusicAlly's Paul Brindley and two other industry insiders in Soho next Monday (event details and, after 24 Nov, report), but unfortunately I'm on a course on Monday evening.
I'm part of a team that is starting an assignment to develop and pilot an accreditation framework for learning technologists. The work has been commissioned by the Association for Learning Technology.
Our team includes people with good learning technology contacts in Higher Education, Further Education and the commercial sector: David Kay and I are covering the latter. We'll be researching and making contact with many of the bodies that have already done work in this area, or that otherwise have a stake in it (e.g. the Institute for IT Training, The E-learning Network, The Forum for Technology in Training, and the CIPD). But if you're working in e-learning/learning technology in the private sector and would like input into what we develop, please get in touch as soon as possible, either by adding a comment to this posting or by contacting me privately.
Just confirmed today: this is where I'll be living and working in a few weeks' time (once I've got all the necessary communications installed). My new address will be 14 Chequer Court, 3 Chequer Street, London EC1Y 8PW — it's just 150m from where William Blake is buried (map), but until the final move date is confirmed, please keep using my current contact details. (Photograph © Urban Spaces, 2003.)
For the last few Saturdays I've been doing Thomas Gardner's Sonic Arts course at the Mary Ward Centre. Today we moved from theory to practice, and started experimenting with applying various effects to soundscape recordings we'd made, using Max/MSP for the treatments.
There will be a performance of our collective composition, using live manipulations of our recordings, spoken text, and possibly my cracklebox, on 13 December. I quite like the recordings I made in their untreated state, so I've made mp3 versions of the sound of refuse collection outside my window (which I often get twice a day on weekdays) and a snippet of a journey on a Thameslink train (each is 1 minute long, and just under 1.2 MB).
Prompted by David Rieff's article in October's issue of Prospect magazine and his talk at the RSA this past week, both on the subject of an alleged crisis of legitimacy in the United Nations, here are a few notes on transnational institutions and the redefinition of state sovereignty.
I'm just back from the UK launch of the Intelligent Street installation at University of Westminster's Harrow Campus. I believe it should be running there for some time (months, or even a year) — though security is tight at the campus so it's not easy just to drop in.
The concept is fairly simple: music is played in a public space, and the people in that space can send one-word SMS commands from their phones, such as 'dark', 'cheesy' or 'energise', and the music adapts accordingly. The music is produced on four channels using SuperCollider, and it can adapt to combinations of up to ten commands at a time (so, using the commands given above, you could hear music that is dark, cheesy and energetic all at once).
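To make the blending idea concrete, here is a minimal sketch of how up to ten one-word commands might be combined into a set of musical parameters. This is purely illustrative: the command vocabulary, parameter names and averaging scheme are my own assumptions, and the real installation is implemented in SuperCollider, not Python.

```python
# Hypothetical sketch of Intelligent Street's command blending.
# Command names and parameters are invented for illustration;
# the actual installation uses SuperCollider.

MAX_COMMANDS = 10  # the installation combines up to ten commands at a time

# Each command nudges some musical parameters (values in the range 0.0-1.0).
COMMAND_EFFECTS = {
    "dark":     {"brightness": 0.1, "tempo": 0.3},
    "cheesy":   {"harmony": 0.9, "brightness": 0.7},
    "energise": {"tempo": 0.9, "density": 0.8},
}

def blend(commands):
    """Average the parameter targets of the currently active commands."""
    active = commands[-MAX_COMMANDS:]  # keep only the ten most recent
    totals, counts = {}, {}
    for cmd in active:
        for param, value in COMMAND_EFFECTS.get(cmd, {}).items():
            totals[param] = totals.get(param, 0.0) + value
            counts[param] = counts.get(param, 0) + 1
    # Each parameter is the mean of the targets that mention it.
    return {p: totals[p] / counts[p] for p in totals}

# Three texted commands pulling in different directions:
params = blend(["dark", "cheesy", "energise"])
```

With those made-up mappings, 'dark' and 'energise' pull the tempo in opposite directions, so the blended result lands in between — dark, cheesy and energetic all at once, as described above.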
Intelligent Street is probably best seen as R&D rather than art, but as project team member Mark d'Inverno suggested to me, it could also have educational applications, such as drawing schoolchildren into understanding what qualities make music dark, cheesy or energetic.
The Starfire video prototype, produced by Bruce Tognazzini and colleagues at Sun Microsystems, was explicitly conceived as a kind of response to Apple's Knowledge Navigator (described in my posting a couple of days ago).
But Starfire set itself a harder challenge by focusing on a scenario exactly ten years in the future (Knowledge Navigator was set at some unspecified date at least twenty years in its future). Nine of those ten years have now elapsed, so the chickens are on their way home to roost. Starfire offers an even clearer case study of how social and economic factors have a hard-to-predict but easy-to-underestimate influence in shaping the development of user interface technologies.
When I wrote about the Tate Galleries' e-learning resources a couple of months ago I said I didn't know of any major arts/culture organisations offering full accredited courses by e-learning.
Since then the Tate has announced details of two new online courses. The Level 1 course is free and starts in January 2004. It looks as though it will simply offer unsupported, self-managed learning using online materials. But the Level 2 course, available from next October, is a more serious affair, including tutor support and online discussion facilities for groups of up to ten learners.
In 1987 Apple produced the Knowledge Navigator video, which presented in scenario form the kind of user interface that they thought knowledge workers would be using twenty or more years in the future. Over the last week there's been considerable interest raised by Jon Udell's revisiting of that video, and his review of how accurate its projections were.
My feeling is that Udell is sometimes a bit generous in his assessment of what progress has been achieved. Comparing Knowledge Navigator with Bruce Tognazzini's sister video, Starfire (which projected only ten years ahead, from 1994 to 2004), shows how many of the projections of the past turn out to have been over-optimistic.
Today I dug out an old academic-style paper I wrote in 1992, On the Definition and Desirability of Autonomous User Agents in CSCW, and put a web version in my archives section. (CSCW stands for Computer-Supported Co-operative Work.)
I think the paper still stands up fairly well as a critique of the idea that our computers will one day have faces and talk back to us (and each other) as though they were independent, anthropomorphic beings. Computers that appear like people (see, for example, Ananova, the virtual newscaster) are still a novelty item, and thankfully behave with none of the autonomy that characterises real people. Since 1992 research has progressed on software agents, which carry out some tasks quasi-autonomously using artificial intelligence techniques, but a review of the MIT Software Agents Group's list of projects shows that these are rarely, if ever, presented as autonomous beings in the user interface.
The trigger that prompted me to revisit this old paper will be made clear in my next posting.