Club Admiralty

v7.0 - moving along, a point increase at a time

Multilitteratus Incognitus

Traversing the path of the doctoral degree

MOOC Completion...according to whom?


The other day I had an interesting (but brief) exchange with Kelvin Bentley on twitter about MOOC completion.  This isn't really a topic I come back to often, given that completion rates for MOOCs, as a topic, seem to have died down, but it is fun to revisit.  To my knowledge, no one has come up with a taxonomy of the different degrees of completing a MOOC†.

But let me rewind for a second.  How did we get to the topic of MOOC completion?  Well, I've been attempting to make my extended CV more accessible (to me).  In the past, I used a WYSIWYG HTML publishing platform to manage my extended CV‡.  The idea was that I could easily export it and just push it onto the web.  In practice, I never did this, and when I changed computers it became a hassle to maintain. So, I moved everything over to Google Docs for cleanup (and easier updates).  In cleaning up my CV sections (I am not done, btw!), I made a startling self-discovery: in the 2013-2016 period, I binged on a lot of xMOOCs! 😅  The most notable platforms were Coursera, edX, and Udacity, but there were others, such as the now-defunct Janux (University of Oklahoma) and Open2Study (Open Universities Australia), as well as overseas platforms like MiriadaX and FutureLearn.  In the 2011-2012 period I didn't do a lot of MOOCs, mostly because back then most were cMOOCs and xMOOCs hadn't yet spread like wildfire.

This realization raises the question: "How many did you complete?" (and you guessed it, Kelvin asked it...).  My answer comes in the form of a question: "based on whose metrics and measures?"  When you sign up for a paid course (e.g., a professional development seminar, college course, certification prep course, etc.), I think there is an unspoken assumption that the goals of the course mirror, to a greater or lesser extent, the goals of the learner♠.  Can this assumption transfer over into the world of a free MOOC?  I personally don't think so.  I've long said that the course completion metric (as measured by completing all assignments with a passing grade) is a poor metric.  One very obvious reason to me was that people simply window-shop; and since there is no incentive to unenroll, people don't take that extra step to leave the course formally, as they would with a paid course where they could receive a refund. I've been saying this since xMOOC completion rates were touted as an issue, but few people listened. Luckily it seems that people are changing their minds about that (or just don't care 😜). I guess George Siemens's advice to Dave Cormier holds true for my own rantings and ravings: publish those thoughts in a peer-reviewed journal or they don't exist 🤪 (paraphrased from a recent podcast interview with Dave).

Assuming that we exclude window-shoppers from our list of completion categories♣, what remains?  Well, instead of thinking of distinct categories (which might give us a giant list), let's think of completion in terms of whose perspective we are examining.  On one extreme, we have the learner's perspective.  The extreme learner's perspective is characterized by total control by the learner over what the goals are. In this perspective, the learner can be in a course, complete a certain percentage of what's there, and still consider the course done. Why?  The learner might have prior knowledge, and what they are looking for is to supplement what they already know without going through the hoops of any or all assessments in the course. They've probably evaluated the materials in the course, but if they already know something, why spend a lot of time on something already known? Or, an item required for 100% completion is only available in the paid version (some FutureLearn courses are like this) and is inaccessible to learners on the free tier.

On the other extreme, we have the perspective of the course designer. This is the perspective that most research studies on completion seem to adopt. The course designer is working with an abstracted learner population, with abstracted goals.  The outcomes of the course might be based on actual research into a learner group, they might be based on the intuition of the course designer, or they might just be whatever the course designer had an interest in preparing (sort of like the chef's soup of the day: it's there, you can have it, but it doesn't mean that's what you came into the restaurant for).  In a traditional course (the ones you pay for and get credentialed for), it makes sense that a learner could simply go along for the (educational) ride, because they are paying and (presumably) they've done some research about the course and it meets their goals. In a free offering, why would a learner conform to the designer's assumptions about what the learner needs? Especially when a free offering can (and probably does) gather the interest of not just aspiring professionals, but also people already in the profession (who presumably have some additional or previous knowledge), as well as hobbyists who are learning free-range?

Given those two extremes of the spectrum, I would say that there is a mid-point.  The mid-point is where the power dynamic between the learner and the designer is at equilibrium: the educational goals (and the hoops the learner is willing to jump through) coincide 100% with what the designer designed. Both parties enter the teaching/learning relationship on equal footing.  If you lean a little to one side (the learner's side), the designer might consider the course incomplete; and if you lean to the other side (the designer's side), the learner might start to feel a bit annoyed because they have to jump through hoops that they feel are not worth their while. Some might begrudgingly do it, others not; it really depends on what the carrot at the end of that hoop is.  For me, a free certificate or badge did the trick most times. The threat of being marked as a non-completer (or, more recently, the threat of losing access to the course altogether 😭), however, does not motivate me to "complete" the course on the designer's terms.

That said, what about my experience?  Well... my own behaviors have changed a bit over the years.  When xMOOCs first hit the scene I was willing to jump through all the hoops for the official completion mark.  I did get a certificate at the end; and even though it didn't really carry much (or any?) weight, it was a nice memento of the learning experience. Badges were custom made (if there were badges), and the certificates were each unique to the MOOC that offered them.  Back in the day, Coursera had certificates of completion (you earned the minimum grade to pass) and certificates of completion with distinction (you basically earned an "A").  It was motivating to strive for that, even though it didn't mean much. It was also encouraging when MOOC content was available beyond the course's official end, so you could go back and review, re-experience, or even start a bit late.  As we know, things in the MOOC world changed over the years.  Certificates became something you had to pay for.  Sometimes even the assessment itself was something you had to pay for - you can see it in the MOOC, but you can't access it.  Peer essay grading on Coursera wasn't something I found particularly useful, but I was willing to jump through the hoops if it meant a free memento at the end of the course (achievement, badge, certificate, whatever). Once things started having definitive start and end dates♪, with content disappearing after that, and certificates (which still weren't worth much to the broader world) started costing money, jumping through the same silly hoops (AES, CPR, MCEs, etc.) just didn't feel worthwhile; it no longer made sense to go beyond my own learning goals and jump through someone else's hoops.

So, did I complete all those MOOCs?  Yup, but based on my own metrics, needs, and values.

What are your thoughts on MOOC completion?  Do you have a different scale? Or perhaps defined categories?

† There may be an article out there somewhere that I've missed, but in my mission to read all of the MOOC literature I can get access to, I haven't found anything.

‡ What's an extended CV?  It's something that contains everything and the kitchen sink.  That workshop I did back in 1999 for that now-defunct software?  Yup, that's there...because I did it, and I need a way to remember it. It's not necessarily about the individual workshops, but about documenting the learning journey.  The regular CV is somewhat cleaner.

♠ Maybe this assumption on my part is wrong, but I can't really picture many reasons (other than "secret shopper") why someone would pay money to sign up for a course that doesn't meet their goals.

♣ Window-shoppers I define as people who enroll to have a look around, but either have no specific educational goals they are trying to meet (e.g., lookie-loos), or have goals but deem the MOOC not to meet them (e.g., "thanks, but this is not what I am looking for"). Either way, they don't learn anything from the content or peers in the MOOC, but at the same time they don't unenroll, since there is no incentive to do so (e.g., a refund of the course fee).

♪ e.g., module tests deactivating after the week was over - if you missed that window, you couldn't take them AT ALL

A decade in review...onward to 2020!


I didn't quite expect this, but it seems like everywhere you turn you see "a decade in review" retrospectives - news stories (on radio and TV), and blog posts, twitter threads, and Instagram stories on "the internet" more generally.  I hadn't really thought about doing one of these posts, but what the hay, why not join in? 😜 The last decade has certainly been eventful.  I kicked off the decade by completing my last 2 master's programs, changing jobs (3 departments and 4 titles in the last 10 years), starting to teach, and participating in research.

I absolutely loved Audrey Watters's 100 ed-tech debacles of the decade, so I decided to structure my post around it, since most of those items made an impact on my work life, and some on my leisure. I am not going to go through every one of them, but I'll pick a few (and maybe add some of my own).

New Media Consortium (Horizon report #100)
This one was a shocker for me. The way the NMC just ceased to exist was something I'd expect only from a VC-funded start-up. In the last decade, I was able to attend both conferences that the NMC offered in Boston.  I enjoyed both, and I made quite a few interesting contacts at those conferences. I also used the Horizon Report in the courses I teach - not necessarily as something ultra-definitive, but as material to hone the critical skills of students in my courses (and have some fun prognosticating). While the Horizon Report has been picked up by Educause, there is something distinctly different about the feel of Educause as compared to the NMC. Speaking of conferences that went bust:  Campus Technology.  I used to attend CampusTech every year.  It was held in Boston, which made it super convenient; it had free exhibition-floor passes (which also meant that you could just attend the presentations if you snuck in); and it had a pretty liberal press-pass policy, which allowed me to attend for free as someone affiliated with my school's paper, and later the CIEE journal. It was also co-located with AAEEBL, which basically meant two conferences for the low price of free (for me).  Good times! They will be sorely missed.  I did learn a lot (even if you factor in the amount of hype).

Ning (#98)
Ning is something I came across while I was an MEd student in Instructional Design. Ning, along with SocialGO and Elgg, is a white-label social network platform that allows anyone to fairly easily build communities.  SocialGO was never free (it had a free trial), and Elgg is open source - which, while great, requires the user to do a fair amount of their own IT maintenance.  Not being in a position to do that, I found that Ning hit the sweet spot of free basic hosting (up to 200 users for free?) and no server update and maintenance hassles.  Using Ning I built two networks: one for my MEd program (UMassID), and one for the Applied Linguistics department (this was before I started working for them).  After Ning eliminated their free model I did garner enough support to keep UMassID going for a few years, but each year I felt like I was looking for champions to pay the $200 annual fee.  Most times I was successful, but at some point I just didn't want to keep poking the champions for money any longer. We still use Ning for our department's portal, which makes it easy to post information for students but also keep our alumni in the loop.  With all the changes happening in terms of who owns the platform, I fear that I might need to think about migrating at some point in the future.  I wonder how successful I would be in convincing my university to adopt Elgg, sort of like AU has with their Landing.

Badges (#86)
Open Badges are something that I was really pumped for. I am not sure I am all that disappointed that they haven't taken off like wildfire; any long-term change in credentialing takes time to be accepted and endorsed.  As a gamer, I liked badges because they are very much like achievements.  You can have smaller achievements to push you along, and you can have larger achievements (or stackable badges) that allow for much more descriptive information about what someone is capable of. Over the last 9 years we've had the Mozilla Backpack, Credly, and Purdue's Passport; I am sure that there are more, but those are the ones I've dabbled with.  Now, what I am disappointed in are two things:  (1) everything closing up, and (2) the fragility of badges.  Over the past year, Mozilla stepped out of this arena and migrated things to Badgr.  Credly is shuttering their free version (which allowed folks to create and distribute their own badges, like I did for my classes) and replacing it with a paid version (Acclaim).  I was able to download all of my badges from my backpack and upload them to Badgr (which seems to have the capability to freely create badges), but this brings me to the second problem:  badge fragility.  A number of the badges I've earned over the past 8 years are not importable into my backpack because their URLs are no longer accessible (and hence not verifiable).  Now, I know that I have those badges, and I can post them on this blog or website, but it does pose a problem for the long-term viability of badges.  If someone wants to verify my diplomas, they can contact the Registrar's office at my university and confirm that I've completed certain areas of study.  With badges, this is currently an issue.
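To make the fragility point concrete, here is a rough sketch in Python of why hosted badges break. Verifying a hosted Open Badges assertion means fetching several URLs: the assertion itself, the badge class, and the issuer. If any of them goes dark, the badge can no longer be verified, even though you still "have" the image. The assertion below is entirely made up for illustration; it only loosely follows the Open Badges vocabulary, and none of these URLs are real.

```python
# Collect every URL that must stay live for a hosted badge to verify.
# Each one is a single point of failure: a dead issuer site means a
# dead badge, no matter how legitimately it was earned.

def verification_urls(assertion):
    """Return the URLs a verifier would need to fetch for this assertion."""
    urls = [assertion["id"], assertion["badge"]["id"]]
    issuer = assertion["badge"].get("issuer", {})
    if "id" in issuer:
        urls.append(issuer["id"])
    return urls

# A made-up hosted assertion (not a real badge or institution):
assertion = {
    "id": "https://badges.example.edu/assertions/42",
    "badge": {
        "id": "https://badges.example.edu/badges/intro-to-id",
        "issuer": {"id": "https://badges.example.edu/issuer"},
    },
}

for url in verification_urls(assertion):
    print(url)  # if any of these 404s, verification fails
```

Contrast this with the Registrar's office: the institution is the durable verification endpoint, whereas here verification is only as durable as the cheapest web host in the chain.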

Google Reader (#54)
**sigh** OK, this still stings.  Damn you, google! Google Reader wasn't just an RSS reader, it was a community.  I could subscribe to RSS feeds of my favorite blogs.  I could follow people from my contacts, and I could follow other people on Reader.  I could upvote RSS posts, and I could share and basically create an RSS feed of my shared items.  While I can (and do) use another reader now (Feedly), it's just not the same.  I end up sharing the stuff I read on Feedly on my twitter accounts, but that ends up seeming like a lot of spam (because I read and share a lot).  I feel like this change on the web has also changed sites themselves.  Whereas in previous years with Reader I could get the entire news post in my feed, now most sites give you a meager 3-5 line preview and you have to click through to the site for the full thing 🙄.  There just isn't a satisfactory replacement for Reader.  The Old Reader is pretty close in terms of usability, but it really lacks the network that made Google Reader amazing!  Damn you, google!

Google Glass (#26)
In this past decade, I started traveling again after (what seemed to be) a long hiatus from such activities.  I love traveling. I love seeing new things and experiencing something different.  One of my travels brought me to Italy, to Herculaneum and Pompeii.  The experience was amazing, and I loved how well preserved these cities were.  No matter the level of preservation, though, they are still ruins.  As I was walking through the streets of these ancient cities, I was thinking what an awesome use case this would be for Google Glass.  One could use augmented reality while walking through the streets to see the buildings in their full glory. They could see Romans walking through the street, going about their day-to-day lives. They could hear the sounds of a dead language being spoken again all around them. Put in some QR codes and some geotagged locations and you've got educational pop-ups!  This sounded like a great vision of the future. I was thinking in terms of tourism, but it could easily apply to learning.  Sadly, this is not the case...

MOOCs (#4)
Well, technically Audrey's #4 is the phrase "year of MOOCs", but I think that items like "promise of free" (#99) and "UC Berkeley Deletes Its Online Lectures" (#67) could fit in here. MOOCs may be seen as flops, and perhaps for some things they are. But, as Siemens recently wrote in a twitter thread, MOOCs aren't out yet (OK, paraphrasing here).  Just because we (in North America) are "done" with them doesn't mean that others are done with them.  They may make a monumental comeback depending on how they evolve outside of our continent.  I still think there is a lot of promise in MOOCs (and I look forward to the regenesis of the cMOOC), but there are attributes of xMOOCs that have really bugged me over the past few years.  When there were only one or two MOOCs happening at a time, it was perfectly manageable for me (as a learner) to jump in and participate.  When options for providers and topics exploded, it became hard.  I chose which MOOCs to attend in real time, and which ones to do as a self-paced learner.  Well, it seems like self-paced is not really much of an option any longer, the way things have evolved in the xMOOC world.  Once the course is over, unless you've paid for it, it becomes locked and unavailable.  xMOOCs have embraced a freemium model that takes away agency on the part of learners.  Now, I hope MOOCs survive because they are part of my own balanced learning diet, but I do hope that providers and designers keep tweaking the recipe.  The freemium model doesn't really work all that well for things that you aren't credentialing people for.  Maybe in the coming decade we'll see a resurgence of the cMOOC 😀.

I don't want to close out the blog post with only negatives, so I think I should mention some positives.  This past decade was about networks! Through networks, I "met" a lot of interesting and intelligent people who have positively impacted my life.  These are people I've learned with in MOOCs.  These are people with whom I've conducted research.  These are people with whom I've virtually connected (and in some cases even met face to face!).  These are people on twitter chats, in DMs, and at conferences. And also people in my doctoral cohort(s).  Even though technology might not always work for us, the people involved have made the last decade on the web a supportive learning environment for me, so thank you all for the MOOCs, the mLearning, the Rhizos, the lurking, the critical ID, the book and article recommendations, the conference crashing (like wedding crashers, but for conferences), the dissertation encouragement, and so much more! I hope the learning continues in the decades to come 😎


eLearning 3.0: How do I show my expertise?

With my dissertation proposal in the hands of my committee and off for review, I thought I'd participate in a MOOC while I wait to hear back.  Yes, I do have some articles that have piled up (which may be of use to my dissertation), but I thought I'd be a little more social (lurk a little, post a little).  The funny thing is that as soon as I lamented the lack of cMOOCs...there it was, eLearning 3.0 popped up on my twitter feed...and a few Greek colleagues invited me to one in a Moodle. I guess the universe provided for me.

Anyway - I had listened to both the intro video (week 0?) as well as the Downes & Siemens chat (Weeks 1 & 2), and I had jotted down a few things that piqued my interest...but of course I left those notes in the office. I guess I'll be blogging about them next week.  The freshest thing in my mind is the chat about xAPI and the LRS (Learning Record Store). In all honesty, this went a little over my head. I think I need to read a little more about xAPI and this whole ecosystem, but the LRS is described as enabling "modern tracking of a wide variety of learning experiences, which might include capturing real world activities, actions completed in mobile apps or even job performance. Data from these experiences is stored in the LRS and can be shared with other systems that offer advanced reporting or support adaptive learning experiences".
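For what it's worth, the data model underneath xAPI is more approachable than the ecosystem talk suggests: each statement is basically "actor verb-ed object". Here's a minimal sketch in Python; the actor/verb/object shape follows the xAPI idea of a statement, but the name, email address, and activity URL are all invented for illustration, and a real statement sent to an LRS would carry more fields than this.

```python
# A minimal sketch of an xAPI-style statement: "actor verb-ed object".
# An LRS would receive something like this as JSON over HTTP and store
# it for later reporting. All identifiers below are made up.

def make_statement(name, mbox, verb_id, verb_label, activity_id):
    """Build a bare-bones xAPI statement as a Python dictionary."""
    return {
        "actor": {"name": name, "mbox": f"mailto:{mbox}"},
        "verb": {"id": verb_id, "display": {"en-US": verb_label}},
        "object": {"id": activity_id, "objectType": "Activity"},
    }

stmt = make_statement(
    "AK", "ak@example.com",
    "http://adlnet.gov/expapi/verbs/completed", "completed",
    "http://example.com/moocs/elearning-3.0/week-1",
)
print(stmt["verb"]["display"]["en-US"])  # completed
```

The interesting part is that the "object" can be anything with a URL-ish identifier - a course week, a video, a conference session - which is what lets an LRS capture the messier, informal learning the quote above is gesturing at.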

This got me thinking about the onus (read: hassle) of tracking down your learning experiences as a learner. I also credit a tweet I read this morning about credentialing, by Donna Lanclos, that connects really well with this. As a learner, I don't really care about tracking my own learning experiences. I participate in a learning experience, be it a workshop, a webinar, a course of study, research for a paper to be published or presented, or even sustained interaction on a common topic across my PLN.  I enter the learning experience because there is something I want to learn. It can be a simple thing (e.g., how to unscrew the case of my PC tower to install more RAM), or something more complicated (e.g., preparing a social media strategy for an organization). Few people enter a learning experience just to get a credential†. However, it's the credential that opens doors, be they doors to a promotion, to a new job, or even to an opportunity to be part of an exciting new project. So, it seems necessary that we, as learners and professionals, document all of this in some way.  The problem is that it's a hassle. There are two big issues here:
(1) What to track (i.e., what's relevant)
(2) Where to track it?

Both issues, very predictably, are answered with "it depends".  What to track depends on the context. You can track everything, but not everything tracked is used in every instance where credentialing information is needed. For example, the most commonly tracked things are your college degrees.  These are fairly easy to track because most of us have a small, countable number of them (1-3, I'd estimate). However, this doesn't necessarily show growth and increasing expertise as a professional.  So we delve deeper.  Taking myself as an example, here are some learning opportunities I have been part of over the past few years (some offered certificates or badges, some did not):  MOOCs, week-long workshops, day-long workshops, conferences, professional development webinars, self-paced elearning, required workshops on campus (e.g., campus compliance, purchasing, etc.), master's and doctoral degree programs, virtually connecting sessions, and so on. Each format is different.  Some have assessments, some do not. Some are mandatory, some are not. They all contribute to my knowledge of my field.

Tracking is another issue.  Where do I track things?  There are many places.  I have a resume - which is out of date, and I can't even find the Word document any longer... I have a CV in Word format which I created this year for work purposes; there is LinkedIn; there is ORCID; and there are document repository networks like Mendeley, ResearchGate, Scribd, and SlideShare, in addition to places where you can help folks with their questions, like Quora.  There is Goodreads to track what you read. There are also places to track your digital badges, like the Open Badge backpack.  I once joined a free service, whose name escapes me at the moment, that was so granular it could track articles you read - you tagged them with specifics (e.g., elearning, instructional design, online learning), and the service would add 'credit' to your profile for those things★.

So as not to belabor the point: over the years I've come across a variety of learning situations where I've had learning experiences - some with a nice shiny certificate at the end, others with just warm fuzzy feelings of accomplishment. How do we automate this multiple-in, multiple-out process so that we can track things with more precision, but also have the ability to spit out as many customizable reports as we need for credentialing purposes?  I don't know about you, but I find myself not having enough time to document everything, and I certainly don't keep things like CVs, resumes, and my LinkedIn profile updated frequently.  I think this will be one key challenge in eLearning 3.0.
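To make the "multiple-in, multiple-out" idea a bit more concrete, here is a toy sketch of what such an aggregator might look like. Everything here (class names, record kinds, tags) is invented for illustration and doesn't correspond to any real service's API: records from any source go into one store, and different filters spit out different reports - one slice for a CV, another for an annual review.

```python
# A toy "multiple-in, multiple-out" learning record tracker: everything
# goes into one list, and tag/credential filters produce different views
# for different audiences. All data below is invented.

from dataclasses import dataclass, field

@dataclass
class Record:
    title: str
    kind: str              # e.g., "MOOC", "workshop", "webinar"
    tags: set = field(default_factory=set)
    credential: str = ""   # certificate, badge, or empty for warm fuzzies

class Tracker:
    def __init__(self):
        self.records = []

    def add(self, record):
        self.records.append(record)

    def report(self, *, tags=None, credentialed_only=False):
        """Spit out a customizable slice of the full record."""
        out = self.records
        if tags:
            out = [r for r in out if r.tags & set(tags)]
        if credentialed_only:
            out = [r for r in out if r.credential]
        return [r.title for r in out]

tracker = Tracker()
tracker.add(Record("eLearning 3.0", "MOOC", {"elearning"}, credential="badge"))
tracker.add(Record("Campus compliance", "workshop", {"required"}))
print(tracker.report(credentialed_only=True))  # ['eLearning 3.0']
```

The hard part, of course, isn't the filtering - it's the "multiple-in" half: getting workshops, webinars, and MOOCs to feed records in automatically instead of by hand, which is exactly the hassle described above.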


† Well, it's my hypothesis that most people enter a learning experience for the learning, and not just for the certificate/diploma/badge that comes at the end. I do know that there are people like that around, but I think they are not in the majority.
★ Tracking every Chronicle and IHE article I read got tiring pretty quickly - I read too many articles in a day for manual input to be feasible. I dis-enrolled from that social service within a few days ;-)

Mentor-Teacher-Hybrid Presence-course design...


This semester is turning out to be quite a busy one.  It was a good idea not to teach a graduate course this semester so I can focus on my dissertation proposal; however (like that irresistible dessert at the end of the meal), various collaborative projects have come in to fill the "void" left in my schedule from not teaching (the one that is supposed to be going into dissertation prep), and these projects have me thinking.

First is the aspect of Hybrid Presence.  Suzan and I coined this term to describe something between Teaching Presence and Learner Presence for the most recent Networked Learning Conference.  We are currently working more on this topic for an upcoming book chapter.

Second is gamification - a term that has come in and out of my list of curiosities, and one I want to play around with more.  I've done some work on this for school and for professional-organization presentations, but nothing big in terms of an article (in my ALECS proposal it was only part of the ingredients).

Finally, since I am not teaching next spring (how much do you want to bet that other papers will fill in the void, LOL), I've been thinking about the course I usually teach in the summers.  I facilitated the transition from "Introduction to Instructional Design" to "Foundations in Instructional Design and Learning Technology" - a small word change, but the connotations of such a change were profound for the course.  Rebecca H and I have taught variations of the course, as well as variations of INSDSG684. For the past 4 years I've wanted to gamify the learning experience, which I have partly done through badging, although that seems not to have caught on all that well; as an opt-in experience it varies a lot. This leaves me pondering: is it wise to move from the gamification end of the spectrum to full-on gaming in an introductory course?  If yes, how do you do it?  The boardgame metaphor appeals to me, but other metaphors do as well!

On another strand, there are students in the MEd program I teach in who are close to graduation and whom I've had in my class at some point or another.  Now that they are a little further along in the program, I'd like to invite them back, for credit, to be part of the introductory course.  But not as teaching assistants - I think that's a waste of their time and money. Rather, I want them to be mentors who are developing what Suzan and I term a Hybrid Presence.  I'll be around to mentor the mentors (while working on my own Hybrid Presence), but I want to tease out how that would work as a for-credit course.  Since I only really teach two courses per year (limitations of employment), my current puzzle to solve is this: I want to combine the transformgameation† of the introductory course with this mentorship model I want to develop. This way I am working on a gamified design that's (maybe) more interesting, and it won't bore the mentees, since they will be part of something new.

What do you think of this idea?

† word I invented, transform + game = transformgameation, tell it to your friends, let's have it catch on.


Getting beyond rigor

The other day I got access to my summer course on Blackboard.  With just under 25 days to go until the start of courses, it's time to look at my old syllabus (from last summer), see what sorts of innovations my colleague (Rebecca) has in her version of the course, and decide how to update my own.  I had some ideas last summer, but since then the course has received an update to its title and course objectives, so I need to make sure I am covering my bases.

Concurrently, in another thread: while commuting this past week I was listening to some of my saved items in Pocket, and I was reading (listening to) an article on Hybrid Pedagogy by Sean Michael Morris, Pete Rorabaugh, and Jesse Stommel titled Beyond Rigor. This article brought me back to thinking about academic rigor and what the heck it really means.  I think it's one of those subjects that will get a different answer depending on who you ask.  The authors write that:

institutions of higher education only recognize rigor when it mimics mastery of content, when it creates a hierarchy of expertise, when it maps clearly to pre-determined outcomes

I suspect that's partially one definition of what rigor is thought to be; however, I've come across courses that I personally found lacking in rigor even though they met those specific requirements.  Sometimes I've found that rigor has to do with the level of work expected of a learner.  If we think of learning as exercise and school as a gym, a rigorous workout is something that raises your heart rate, burns calories, and gives your muscles a workout. At the end of a rigorous workout you feel tired. Luckily for exercise folks, that stuff is easily measured: I know that I did something rigorous when I feel exhausted after the gym. When it comes to learning, however, we don't have instrumentation that is as easy to use and assess.  So, what the heck is rigor in a college course? How can we define it? Is it a malleable concept, or is it hard set?

Interestingly enough, the authors approach rigor not from the attributes of the content (i.e., how much of it, and by whom), but rather from an environmental aspect: rigor emerges from the environment rather than being a predefined constant.  Rigor emerges when the environment is engaging to the learner, when it provides a means to support critical inquiry, when it encourages curiosity, when it is dynamic and embracing of unexpected outcomes, and finally when the environment is derivative.  This last one they define as a learning environment that is "attentive and alive, responsive not replicative."

The one constraint I have with this course (well, other than the course description ;-) ) is the textbook. The department uses The Systematic Design of Instruction by Dick, Carey, and Carey as its foundational book and model. Last summer I developed the course from scratch using DCC as the core organizing principle.  Now, while still important, after the update to the course title and description DCC must share the stage with other elements, so I am reconsidering (again) what rigor looks like in this environment.  I am pondering how I can rework an introductory course to be derivative and to give students more control in shaping the curriculum (thinking rhizomatically here), beyond having to choose from some finite options...

At the moment, rigor for me is still one of those "I know it when I see it" things.  It would be interesting to discuss this a little further with others who are interested in the topic, to see where we land on it.

On another note, and rigor aside, the two things I am keeping and/or expanding are mastery grading and digital badges. With mastery grading (you either pass or you need revision), I am not going back to numerical grades for anything; I would prefer that students focus on the feedback rather than a number.  Digital badges worked fine last summer; I just need to figure out how to make them better.


Gimme an El! Gimme a Pee! Gimme an Ess and an Ess!

What does that spell?  elp-ss-ss ;-) OK...well, that sounded funnier in my head...

Anyway! Week 5 of NRC01PL (last week! All caught up! yay!) was about Learning Performance Support Systems.  My first introduction to LPSS (a brief one at that) was in an instructional design course almost 10 years ago (if my memory serves).  The funny thing is that we also talked about LPSS (without using that label) in a Knowledge Management course while I was doing my MBA.  The lesson here?  Interdisciplinarity is indeed worth practicing! :-)

When we learned about LPSS way back when, it was within a corporate learning context. The idea of an LPSS, which in my knowledge management course tied into communities of practice, was that employees, who are also learners, have access to a system that gets them realtime, just-in-time help with whatever they are doing.  An example might be a short video on how to print something from your computer to a networked printer in your office.

Of course, the LPSS that was discussed last week in NRC01PL was not this type of LPSS.  This LPSS seems more like a place to bring together all of your learning.  I created an account† and had a look around. I know it's still in beta, or perhaps even in alpha, but I don't think this type of service is really for me.  I am feeling like an old fart for saying this, but I feel a bit saturated with services at the moment.

From the introductory video, LPSS seems a little like services such as Degreed that allow you to track and account for every course you complete, every book you read, every article you read, and so on.  I did try Degreed for a while in order to keep track of things that I do on the web, but within a few days I shut down my account. I also tried another similar service on the web, but I forget what it was now.  Being able to track all of your accomplishments is pretty nifty, don't get me wrong, but I already have a ton of online services that do that for me. Goodreads, for example, keeps track of what I've read.  Zotero and Mendeley do that for academic articles. Diigo & Delicious keep track of what I share on twitter.  I also have a CV where I keep track of all the MOOCs I attended (among other things). I used to keep track on Class-Central, but I stopped updating that profile ages ago.  The question is, do I really need to go in and manually input stuff again? Or should I just be able to point services like these to auto-import my badges, certificates, attendance records, and so on?

I feel like I've input my degrees and educational information so many times, in so many services, that I don't feel like doing it again... I eventually stopped updating my info on various services because it's a hassle to keep it all up to date :-) I am also getting annoyed by services like ResearchGate that ask me to upload copies of my papers to their service because they don't have a little URL field.  I prefer to publish in open access journals.  Most of what I write is already open access, with a URL that points to it.  Why not allow me to simply give a URL? (Feeling cranky toward RG today hehe.)  If I have an ORCID and a Google Scholar profile, why can't other services tap into that?  Publons seems to be able to; why can't others?

It's not all doom and gloom with LPSS though, and the fact that it's in development means that there is potential to fix these issues. It also seems that the LPSS learns more about what you like and what you don't like, and works as a recommender system for your own learning, which seems really cool.  It actually sounds a lot like a system I proposed a few years ago for MOOC recommender systems (and published on this blog...somewhere) ;-).  I think that this is probably the real strength of the LPSS - good recommendations for further learning - but it needs data in order to do that, and I think that's probably the most challenging part. Stephen called this "deep learning analytics": learning analytics for one person across many services and platforms.  This can be pretty tough to accomplish given the closed nature of some platforms. I wonder if digital badges could help address this in some way.

A thing that jumped out at me was that the end goal of the LPSS is not to act simply as a teaching system, but rather as a learning system that helps learners accomplish their goals. This is done through subsystems that talk to one another, such as the Personal Learning Record subsystem, the Resource Repository Network subsystem, and learning analytics.  I did wonder, though: if the intent is to put the learner in control of their own learning goals, would it make more sense to have all of this in a system that is not multiuser (or, if it is, electively multiuser), so that the learner has a domain of their own and can have it hosted, like WordPress, on any server they want?  Does the centrality of LPSS run counter to the learner-in-control argument? Just a thought.

Finally, an interesting statistic shared last week: Carnegie Mellon research seems to indicate that the amount we need to memorize for work has gone down. In 1986 it was around 75%, in 1997 around 20%, and in 2006 around 10%.  So memorization is not something that should be a goal. Lifelong learning is the goal.

That's all for now.  What did you think?

† creating an LPSS account reminded me that I haven't used ProSolo in ages. We used ProSolo back in the days of DALMOOC. I logged into ProSolo, had a look around, and left.  Not much for me there at the moment. Too bad, because its potential seems to have fizzled.  I wonder what others think.


Seeking the evidence


In my quest to catch up on Pocket - before the news becomes stale - I came across this post by cogdog on seeking the evidence behind digital badges.

The anatomy of the Open Badge includes the possibility of attaching links to evidence for the skill that you are being badged for.  Of course, just because there is an associated metadata field available for people to use, it doesn't mean that people actually use it!
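For the curious, here is a minimal sketch of what that looks like in practice. The field names follow the Open Badges 2.0 vocabulary as I understand it; the issuer, recipient, and all URLs are hypothetical placeholders, not a real badge.

```python
import json

# A minimal, hypothetical Open Badges-style assertion.  The "evidence"
# field is the optional part that often gets left empty in practice.
assertion = {
    "@context": "https://w3id.org/openbadges/v2",
    "type": "Assertion",
    "id": "https://example.edu/assertions/123",           # hypothetical URL
    "recipient": {
        "type": "email",
        "hashed": False,
        "identity": "learner@example.edu",                # hypothetical learner
    },
    "badge": "https://example.edu/badges/connectivism",   # hypothetical badge class
    "issuedOn": "2016-05-01T00:00:00Z",
    "verification": {"type": "HostedBadge"},
    # The evidence: a link plus a narrative describing what was done.
    "evidence": [{
        "id": "https://example.edu/students/jdoe/connectivism-project",
        "narrative": "Deliverable teaching classmates the principles of connectivism.",
    }],
}

print(json.dumps(assertion, indent=2))
```

The point being: the slot for evidence is right there in the metadata; whether issuers fill it in is another matter.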

I know that the evidence part of badges is often touted as an improvement over grades in courses, or diplomas, because grades don't indicate what specific skills you've picked up. This problem is a bit worse with diplomas and graduation certificates because you can't evenly compare one candidate to another (in my case, say, comparing me to some other computer science major from another university - or heck, even my own university).

Anatomy of badge, by ClassHack

So, in theory, badges are superior to these other symbols of attainment because they tie into assessable standards (that a viewer can see) and into evidence.  And, again in theory, the layperson (aka the hiring manager) can read and assess a candidate's skills and background.  In practice, though, the evidence is lacking, and I am just as guilty of this, having issued badges in two of the courses I teach. From my own perspective-from-practice I see at least two reasons for this lack of evidence:

1. Not all badges are skills based, and the evidence is not always easy to show.
I use badges in my courses as secondary indicators: less about skills and knowledge, and more about attitudes and dispositions.  So, I have secret badges, sort of like the secret achievements in Xbox games, that unlock when you perform specific tasks.  I let students know that there are secret badges, and that they relate to forum participation, but I don't give them the criteria so that they don't game the system.  The objective is to reward those who exhibit open behaviors and learning curiosity.  Once a badge is unlocked, I make an announcement and other people have a chance at earning it too, if they want.  In cases like these, the badge description acts as the means of telling the reader what a person did to earn that badge (e.g., helped fellow classmates at least 3 times in one semester), but I don't attach evidence from specific forums. That seems like a ton of work for nothing (since looking at text from disconnected posts isn't meaningful to someone).
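The unlock mechanic itself is simple enough to sketch. This is my own toy illustration, not my actual course setup: the `HELP_THRESHOLD` criterion and the activity data are made up for the example.

```python
# Hypothetical criterion from the badge description: the secret badge
# unlocks once a student has helped classmates at least 3 times.
HELP_THRESHOLD = 3

def unlocked_badges(forum_help_counts):
    """Return the set of students whose help count unlocks the secret badge.

    forum_help_counts maps a student name to the number of times they
    helped a classmate in the forum this semester.
    """
    return {student
            for student, helps in forum_help_counts.items()
            if helps >= HELP_THRESHOLD}

# Hypothetical forum activity for one semester.
counts = {"alice": 4, "bob": 1, "carol": 3}
print(unlocked_badges(counts))  # alice and carol meet the threshold
```

The criteria stay server-side and invisible to students, which is what keeps the badge "secret" and hard to game.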

2. Good enough for class - but good enough for evidence?
Another concern I've had when thinking about attaching evidence to badges that I issue is fitness for purpose.  Some badges are known to my students (say, the 'connectivism' badge, where students in an intro course create some deliverable to train their fellow students on the principles of connectivism).  For my purposes, an assignment might be good enough to earn a passing mark.  However, my fit for purpose is not someone else's fit for purpose.  Because of this, I have not included links to evidence.  Furthermore, some of the evidence is either locked behind an LMS, or it's on someone's Prezi account, or Weebly site, or Wikispaces page.  The individual student can delete these things at will, so my links to those resources would also go dead.  So, there is an issue of archivability.

One of the things that cogdog mentioned was that "being badged is a passive act".  I think that in many instances being badged is passive, and that has certainly been my experience in a number of cases.  However, I have seen exceptions. There have been a couple of MOOCs, such as the original(ish) BlendKit and OLDSMOOC, where I had to apply in order to receive a badge.  This allowed me, as a participant and learner, to say that I was ready to be evaluated, and the outcome would be a badge if I passed.

What do you think?  Is the evidence more hype than anything else?  Can it be done better? If so, how?

Gamifying Learning - EDDE 803 edition

It feels like it's been a long time since I've written here.  Well, still here, still alive, still cracking away at those books, and articles, and assignments for 803.  Initially, before this course started, I thought it would be a walk in the park given my background in instructional design.  Maybe that was my error.  While, content-wise, it is a walk in the park (given my background), I think I swung the pendulum a little too hard in the other direction, looking to make this course more challenging for me.

So for one of my big assignments I picked gamification as a topic - a topic I knew a little something about thanks to two xMOOCs I completed.  However, instead of resting on my laurels and using what I had learned in those MOOCs, I decided to try and read at least 5 of my (unread) books on gamification and games in the classroom (a self-imposed goal) to gain some greater understanding of the topic before I wrote about it.

In the end, a lot of what I picked up was left on the cutting-room floor, since the paper was 5,000 words max and the accompanying presentation was only meant to be 30 minutes.  I feel like I've made great progress on my personal reading list, but maybe I cranked the stress for 803 up to 11 for no reason ;-) I ended up sleeping for 3 out of 7 days (not consecutively) a couple of weeks ago to catch up on sleep (and to get over whatever cold I had caught).

I am kind of wondering if cosmic powers are pushing me to the limits of my ZPD with this course since I thought it would be a walk in the park, or if it's my own failure of self-regulation ;-) I need to go back to "this is good enough" mode lol.

Here is the presentation I came up with for class (it hinges on audience participation, but you get the gist). 24 days until the end of class.


Have you registered your badge?


When the Rhizo Team (well, a subset of the Rhizo team) and I worked on the article Writing the Unreadable Untext for Hybrid Pedagogy, we used Wordsworth's phrase “We murder to dissect”. If memory serves me right, it was Sarah H. that initially brought this idea forward....or was it Keith? † That's the beauty of swarm writing: individual credit evaporates, and it's what we accomplish together that feeds back to us as individuals.

In any case, it is this phrase that came to mind as I was reading a story on Campus Technology titled New Registry Will Demystify Badges, Credentials and Degrees. The crux of the story is that academia and industry are teaming up to create a registry with the intent of demystifying the value of different degrees, credentials, certifications, and so on. From the news story:
The registry "will allow users to easily compare the quality and value of workforce credentials, such as college degrees and industry certifications, using a Web-based system with information provided directly by the institutions issuing the credentials,"
This raised a bit of an eyebrow.  The first thing that came to mind is how much this will cost, and what the ultimate benefit is.  I am not talking about the cost of setting up the system, but rather, much like going gambling in Vegas, how much it will cost individual credentialing agents to be part of that conversation.  For example, let's assume that I run a training center where I train individuals on Microsoft Windows Server, or Active Directory.  I already give out a certificate of participation to those who make it through the steps, but I also want to give out badges - some more granular than others. Who will add those credentials to the registry? Is it me? Or is it someone else?  Who vets those credentials?  Is there a system of peer review, or can you just take my word for it?  And how much does it cost to be listed?  The reason cost comes to mind is that for online programs in some states (such as Alabama, Arkansas, Maryland, and Minnesota), you need to be registered with the state, which costs money, and in some cases you need to post collateral to be registered (I guess in case someone sues you).

My point is, how fair would such a system be?  Would it really demystify alternative credentialing, or would it just reinforce the existing power structures that we have with academia and professional organizations as credentialing bodies?  Isn't the point of an alternative credential that we are not working within the existing power structures and are looking for valuable alternatives to the way we do things now?  Do we murder our own initiatives in order to "demystify" them and compare them 1:1 with what already exists in the system?

Your thoughts?

† Sarah H. informs me that it was Maha who brought up the quote :)

Assessment of....?

Image from Flickriver, Brian Hillegas
A few days ago, and totally by a stroke of chance, I happened upon a twitter discussion between @HybridPed, @otterscotter, @actualham, and a few others.  I am not sure what the original topic was, but I came in when they were discussing assessment. Do we assess learning or competency? Some regarded learning as transcending competency, and some saw competency as transcending learning. It's hard to really have a meaningful exchange of ideas in 140 characters, especially when the twitter train grows and grows.

When I jumped into this conversation, I took the stance that what we assess is learning, not competency.  Competency, I would argue, is something that develops over a period of time. It is something you hone and improve.  Your skill (i.e., your competency in something) improves the more you practice it. And by practice, I mean being present while doing it and analyzing your own performance during the task, not just going on autopilot.  Learning, on the other hand, for me, is learning distinct facts: that the Declaration of Independence was signed in 1776, that the Greek War of Independence began in 1821, that many historians think of the assassination of Archduke Ferdinand as kicking off World War I, that π = 3.14 (and its digits go on to infinity), and so on.†

From what I gathered, the individuals that saw learning as transcending competency think of competency the way I think of learning - i.e. I can demonstrate that I know how to code an HTML page (got a badge for that, I haz competency), whereas learning is something akin to lifelong learning. It is a skill you acquire to continue learning and it is something that can continue ad infinitum if the learner wants.  Both positions, in my mind, are equally valid because we are defining things differently.

Some types of learning are easy to assess, at least in the short term.  Things that ask the learner to regurgitate discrete pieces of information are easy to implement (short answers, multiple-choice tests, and so on).  Things that require the learner to demonstrate systems knowledge can be done at the individual class level, if you can find microcosms of the system skills that you want to assess and extrapolate some broader competency from that. It's a bit easier to do near the end of one's studies, through a thesis or some sort of comprehensive exam, because by then the learner has a broader set of learnings to draw from in order to explain what is going on in that system.

This issue of learning and assessment is big.  It's big in many fields.  It's big money for companies like Pearson.  It's a big question for accrediting agencies.  It's big in the field of MOOCs, where I've most recently seen some claim that watching videos is learning, and some claim it is not.  Videos are just a tool. They can be used for learning, but they can also go merrily on in the background and become background noise.  I've had the privilege of being able to see live videos...aka lectures...when I was an undergrad (and as a grad student sometimes too!). They were just as interactive as the videos I watch on OCW or various xMOOCs.

Even in courses that were interactive, and where active learning took place, do I still remember everything 10 years down the road?  As part of my BA, focusing on computer science and minoring in Italian (and almost minoring in German; I just needed 1 more class), I took courses in Italian literature, German culture, world history since 1500, and the history of the Weimar Republic and WWII Germany. I learned ANSI C and Java. I learned SQL, and about automata.  Do I remember everything?  Hell no.  Does this mean that my undergraduate education is null and void? I don't think so.  It was just a little building block to get me to where I am now, despite the fact that I don't remember discrete pieces of information.  Even with my most recent MA in Applied Linguistics, there are things that I just don't remember any more.  There are some things that are really vivid because I know them, and some that are vivid because they still trouble me today (Processability Theory being one of them).

I agree with Maha, who joined the twitter train on that topic, that some types of learning cannot be assessed as easily as others. Maybe they'll take my instructional design practitioner's membership card away for agreeing (LOL), but I don't think everything can be assessed by the ABCD method (Audience, Behavior, Condition, Degree‡). This might be doable for some skills, such as firearm training, but in many topics in education there is just too much fuzziness for ABCD to work without reducing assessment to a caricature.

Maha continued with another comment, which is also quite true: that assessment is no guarantee of lifelong learning.  I am sure I did well in all those classes I mentioned (I've got the degrees to prove that I didn't fail anything), but the lifelong journey I am on has little to do with those classes specifically and more to do with my own curiosity.  I'd expand on Maha's comment and say that assessment is no guarantee of practice in that field either.  I completed my computer science degree, but I opted not to get a job in that field. Something else came up that seemed more interesting, and I haven't coded anything in Java or C since.  The closest I come to coding is HTML and JavaScript on my own website.

So, the question is this:  beyond credentialing and certification, does assessment matter?  And if it does matter, in what ways?  Take #rhizo15, for instance. This was a course♠, but how does one assess what I "learned" in it?  Does it matter to anyone but me?

† hey, I am channeling Latour with all of these examples!
‡ an example of ABCD is "Learners in INSDSG 601 with a blank chart of the Dick & Carey model will be able to demonstrate knowledge of the names of the phases of the Dick & Carey Model with 80% accuracy"
♠ Rhizo15 was a course, wasn't it? I guess that's a whole other discussion about what makes a course...