Club Admiralty

v7.0 - moving along, a point increase at a time

Multilitteratus Incognitus

Traversing the path of the doctoral degree

Are MOOCs really that useful on a resume?


I came across an article on Campus Technology last week titled 7 Tips for Listing MOOCs on Your Résumé, which cited the CEO of an employer/employee matchmaking firm.  One piece of advice is to create a new section on your résumé to list the MOOCs you've taken. This is not all that controversial since I do the same: not on my résumé, but rather on my extended CV (which I don't share with anyone), where it serves more as self-documentation than anything else.

The first thing that got me thinking was the piece of advice that says "only list MOOCs that you have completed".  The rationale is as follows:

"Listing a MOOC is only an advantage if you've actually completed the course," Mustafa noted. "Only about 10 percent of students complete MOOCs, so your completed courses show your potential employer that you follow through with your commitments. You should also be prepared to talk about what you learned from the MOOC — in an interview — and how it has helped you improve."  

This bothered me a little bit.  In my aforementioned CV I list every MOOC I signed up for(†) and "completed" in some way, shape, or form. However, I define what it means to have "completed" a MOOC.  I guess this pushback on my part stems from having started my MOOC learning with cMOOCs, where there (usually) isn't a quiz or some other deliverable that is graded by a party other than the learner. I signed up for specific xMOOCs for a variety of reasons, including interest in the topic, the instructional format, the design, the assessment forms, and so on. I've learned something from each MOOC, but I don't meet the criterion of "completed" if I am going by the rubrics set forth by the designers of those xMOOCs.  I actually don't care what those designers set as the completion standards for their MOOCs, because a certificate of completion carries little currency anywhere. Simple time-based economics dictate that my time shouldn't be spent on activities leading to a certificate that carries no value, if I don't see value in those assessments or activities either. Taking a designer's or professor's path through the course is only valuable when there is a valuable carrot at the end of the path. Otherwise, it's perfectly fine to be a free-range learner.

Another thing that made me ponder a bit is the linking to badges and showcasing your work.  Generally speaking, in the US at least, résumés are a brief window into who you are as a potential candidate.  What you're told to include in a résumé is a brief snapshot of your relevant education, experience, and skills for the job you are applying for.  The general advice I hear (which I think is stupid) is to keep it to one page.  I ignore this and go for one sheet of paper (two pages if printed on both sides).  Even that is constraining if you have been in the workforce for more than 5 years. The cover letter expounds on the résumé, but that too is brief (one page, single spaced). So, a candidate doesn't really have a ton of space to showcase their work, and external links (to portfolios and badges) aren't really encouraged. At best a candidate can whet the hiring committee's appetite enough to get an interview. This is why I find this advice a little odd.

Your thoughts on MOOCs on résumés?


NOTES:
† This includes cMOOC, xMOOC, pMOOC, iMOOC, uMOOC, etcMOOC...

EDDE 806 - Post VI.III - The one with Sir John Daniel

OK, I am almost 'caught up' with the stuff I missed while I was on vacation (at least as far as 806 goes).  I remember receiving an email from Pearl indicating that Sir John Daniel would be presenting. Too bad the internet wasn't that reliable :-/  Oh well, thank goodness for recordings ;-)

Sir John Daniel seemed like a pretty interesting person, and very knowledgeable (with over 300 publications to his name), and he must be a respectable human being because he wouldn't hold 32 honorary degrees from 17 different countries if people only liked him for his scholarship.  I guess the bar has been set for me (haha! :-) ). The only area where I surpass him is in the number of MOOCs I've taken versus how many he's taken.  Even as a recording it was great to 'meet' such a distance education heavyweight (maybe I can email him and we can go for some coffee and discuss the future of DE next time I am around his neck of the woods in Canada ;-)  ).

In any case, there were some interesting connections drawn between Open Universities (OU) and MOOCs. The OU UK was created so that it would be Open to People, Open to Places, Open to Methods, and Open to Ideas.  MOOCs, as he argued, could be seen as Open to People (Massive), Open to Places (Open), Open to Methods (Online)...but what about the "C" in MOOC?  What about the course?  I ask: what does it mean to pursue a 'course' in something?  And does the course have some sort of assessment?  He discussed badges a little (and whether or not there is assessment behind them), and he brought up an interesting question: Quis custodiet ipsos custodes? (who watches the watchers?)  This was brought up in reference to ePortfolios and to badging.  It's a good question, and I think it's quite pertinent to higher education in general as well.

We - as a profession - have placed a lot of emphasis and currency (κύρος) on lots of old institutions.  As Sir John mentioned, MOOCs may not be the transformative change in higher education that they were (wrongly, I would argue) claimed to be back in 2012; however, they've made online education more respectable. After all, as Sir John said, if Harvard is doing it, it must be OK.  While I don't have anything against Harvard, I think that this type of attitude is potentially damaging to our field (in general, not just DE), because people don't pay attention to the good work done by DE researchers until Harvard starts paying attention... and even then, they reinvent the wheel at times because they haven't been paying attention.

This type of blindness is replicated in the scholarly publishing industry (MOOCs and Open Access are good threads between this presentation/discussion and the one with Alec Couros). It's hard for new OA journals to break into the ranks of the established journals, so any new journal has an uphill battle when it comes to rank.  University rankings are based on where you publish (at least to some extent?), so that influences where people try to publish.  A bit of a vicious circle.

But it's not all doom and gloom. I think we can make a dent and make OA, and the open institutions that have been doing DE for a while now, more 'respectable' - and perhaps have those institutions viewed with the same respect as Harvard when it comes to DE courses and programs.

One takeaway for me, something to look into further, is the African Virtual University.  I don't know much about it, but it seems pretty interesting (both its history and what it does now).

Your thoughts?

Instructional whatnow?

A number of threads converged last week for me, and all of the threads exist in a continuum.  The first thread began in the class that I am teaching this summer, INSDSG 601: Foundations of Instructional Design & Learning Technology. One of the things that we circle back to as a class (every couple of weeks) is the pair of notions of instructor and designer.  Where does one end and the other begin in this process?  It's a good question, and like many questions, the answer is "it depends".  The metaphor that I use is the one about two sides of the same coin.  In order for instruction to ultimately be successful you need both sides to work together.  An excellent design will fail in the hands of a bad instructor, and a bad design will severely hold back a good instructor (assuming that there is an instructor and it's not self-paced learning). There is the other side too: as instructional design students we were told that we would be working with SMEs (subject matter experts) to develop training, but how one works with SMEs is not clarified.  A good friend of mine, working in corporate ID, told me recently that communication with an SME goes through an intermediary acting as a firewall, and it's hard to get the information necessary to work on good instructional designs (now there is some organizational dysfunction!).  The key take-away here is that you can't really separate out these roles. Both need to be informed by one another, and communication is key to successful training interventions.

In another thread, I was chatting with Rebecca (at some point or another this summer) about assessments and grading in the classes that we teach.  Another layer to this design and instruction challenge was added. You can have a really nice design, with lots of learner feedback and continuous assessment, but the situation might be untenable.  Take for example the case of an adjunct instructor (like me or Rebecca).  At our institution we are paid for 10 hours of effort per week for a specific course (each course counts as 25% FTE, and assuming a 40 hour workweek, each course is about 10 hours of work). These 10 hours include design maintenance work, synchronous sessions (if you have any), discussion forums, and assessment & feedback.  The design of your course might be awesome, but it might require more time on the part of the instructor than the organization has budgeted for.

So the question is: how does good design sync up with organizational norms and constraints?  Organizational norms are something we've talked about in the class as well. Instructional design does not exist in a vacuum.  For the course that I teach in the summer I made it a little more "efficient" by using ✓/✘/Ø grading for all assignments (submitted and passing; submitted and not passing, can revise; nothing submitted), which has addressed the issue of haggling for points to a large degree. This still leaves 43 items per student to be graded, with some level of feedback given to the student on each.

I know that I am still spending more than 10 hours per week on the course, so the question - from a design perspective - is this: What is the most efficacious way of giving learners feedback on their projects and other aspects of the course while still staying within organizational constraints, and while adhering to sound (and researched) practices of pedagogy? In other words, what design options give you the biggest bang for the buck when it comes to teaching presence and learner outcomes?  Given that I've been more than happy to spend the extra time each week on the class, this is not a "problem" I need to solve for myself right now, but it is a design challenge for other colleagues!
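To make that constraint concrete, here is a minimal back-of-the-envelope sketch (in Python) of the arithmetic above. The 25% FTE against a 40-hour week gives the 10-hour budget; the enrollment, minutes of feedback per item, and course length are made-up placeholders (only the 43 items per student comes from the course design described above).

```python
# Back-of-the-envelope grading-load estimate (illustrative numbers only).

FTE_FRACTION = 0.25        # each course counts as 25% FTE
WORKWEEK_HOURS = 40        # assumed full-time workweek
budgeted_hours_per_week = FTE_FRACTION * WORKWEEK_HOURS  # = 10 hours

# Hypothetical course parameters (assumptions, not actual course data)
students = 20              # enrollment
items_per_student = 43     # gradeable items per student, per the course design
minutes_per_item = 10      # time to grade + write feedback on one item
course_weeks = 13          # length of the semester

grading_hours_total = students * items_per_student * minutes_per_item / 60
grading_hours_per_week = grading_hours_total / course_weeks

print(f"Budgeted: {budgeted_hours_per_week:.1f} h/week")
print(f"Grading alone: {grading_hours_per_week:.1f} h/week "
      f"(before design maintenance, forums, and synchronous sessions)")
```

Even with fairly modest assumed numbers, feedback alone can eat the whole budget before design maintenance, forums, or synchronous sessions are counted - which is exactly the design tension above.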

The final thread came from Twitter when (out of the blue?) a burst of discussion on instructional design started, with Maha writing:

@KateMfD how do u design a priori for someone you have not met??? Duh
@KateMfD to this day, I don't understand how Instructional Design begins w "needs analysis" before we ever meet the students!

JR added to the discussion by tweeting:
@Bali_Maha @koutropoulos @KateMfD but in a similar way, how do we know what courses we are going to teach prior to meeting Ss on day 1?
@Bali_Maha @koutropoulos @KateMfD not always a great starting point, but often attempting to benefit the organization, learner comes 2nd

I've been thinking about this and I've been trying to come up with a metaphor that makes sense. The metaphor that came to mind comes from the world of clothing and it's the dichotomy of Tailored versus Mass Produced clothing.  The textbook that we use in my program is the Systematic Design of Instruction, by Dick, Carey, and Carey, using the Dick & Carey model.  The textbook seems to indicate that as designers we have a ton of time to conduct a needs analysis (is the training needed), and a learner analysis (who are the learners), and a context analysis (where learning will take place), and to design a breakdown of what exactly needs to be learned.  And, sure, if we were instructional designers for the rich and famous, on retainer, we'd know a lot of this stuff ahead of time, and if those rich folks wanted to learn to paint, or water ski, or whatever, we'd have the luxury of knowing our learners, environment, constraints, and needs, and we'd be able to do something about it (we'd also be paid the big bucks!). This is what I call the tailored model - we have the luxury of taking all the measurements we need, and the client is willing to wait for the product.

The environment we work in, however, is the mass-produced environment. In our day-to-day work as instructional designers we do our due diligence and try to do some needs analysis, but we also work from educated guesses about who our learners might be.  This is something that we've discussed (either on air or off air) at Campus Technology and AAEEBL this week with different colleagues.  How does one decide what programs to offer?  What courses fit into those programs?  What are the requirements for the program, and how do each course's requirements fit into that puzzle?  Who are the learners who come into those courses?  The answer to that last question is an educated guess.  You might design a program, or a course, or a set of courses with a specific learner group in mind; however, that persona is in fact an educated guess.

Hence, we use assumptions to start the process for what is mass produced, and we change (or adapt) it on the fly as we learn who the learners in our classroom are. There are constraints in place to make sure that the variation is "manageable" - and for a college program (at the graduate level anyway) that constraint is admissions.  By managing the admissions process, faculty and departments know who is coming into their classes, and they can be prepared for that adaptation.  Further adaptation happens in class.  It's not complete adaptation since there are constraints, but adaptation exists (or, I argue, should exist). This way we're taking something that is mass produced and tailoring it to the needs of the individual (to some extent anyway).  This is where design and instruction meet again - two sides of the same coin.


Thoughts?


OLC - Dual Layer MOOCs

Here is the recording of the live session I was in where Matt Crosslin talked about the dual layer MOOC design.  I still question the notion of assessments in MOOCs :-)



Grading Rubrics




The other day I came across this PhD Comics strip on grading rubrics. As a trained instructional designer (and having worked with instructional designers on and off since I started university as an undergraduate), I find that the concept of rubrics has really stuck with me.  That said, I generally struggle with rubrics.

In theory they are brilliant - a way to objectively measure how well someone has done on whatever assessable assignment. In practice, though, they are not that great, and they can be a source of discontent and discord in the classroom (the "why did you indicate that my mark is in category B when it's clearly, in my student mind, in category A?" argument). For this reason I try to create rubrics that are as detailed as I can make them.  That said, it seems that detailed rubrics (like detailed syllabi) are rarely read by students ;-)

Another issue arises with inherited courses. When I've inherited courses from other people, their rubrics tend to be less detailed and more subject to interpretation - which in my mind doesn't help the learner much, and it does little for consistency between faculty members who might teach the same course.  Here is an example (redacted to keep the assignment and course somewhat anonymous; it is an intro course though):


Using this rubric, I would say that two people (who don't know each other) teaching the same course could potentially give two different marks for the same assignment.  What's important here is the feedback given to the learner, so the mark may not matter as much in a mastery grading scenario, and the (good) feedback gives them a way to re-do and improve for a better mark if they want.

The more I design, and the more I teach, the more I wonder whether detailed rubrics are better as a way to on-board professors and instructors into departmental norms, and whether broader rubrics are better for the "student view" and best used with a more mastery-based approach to learning. :-/

Thoughts?



Teaching, Grades, and the Impostor Syndrome


The other day I was reading a blog post by Rebecca on marking and getting a sense of that impostor syndrome creeping in. I love reading posts like these because I still consider myself new to teaching, even though I've been doing it for a couple of years now.  Some of the things that she describes are things that I have thought or experienced, and some are not.

In terms of impostor syndrome, it hasn't come out for me with grading assignments.  In the past, when I've had momentary panics or thoughts that impostor syndrome was setting in, it was usually around content-area knowledge!  Early on, when I started teaching, I wasn't even a doctoral student.  I was a practitioner and life-long learner, with a little research under my belt.  I knew enough, but I didn't consider myself the font of all knowledge - and that was scary.  What would learners think of me?  What if I was in a 'pop quiz' type of situation and the learners asked me some question and I didn't know the answer? Oh no! :-0

Luckily this only happened for a couple of semesters.  I quickly came to two realizations.  First, it's not possible for me to be the font of all knowledge on the subject I am teaching.  Researchers keep researching, things keep getting published, and it's not possible to keep up with everything in order to be completely up to date so that I could answer those unforeseen pop quizzes from students (which never came, by the way).  I am not even 40 yet, so I don't expect to have the same amount of knowledge 'banked' as colleagues who have been active in the profession three times longer than I have.  It's just not a good metric on which to base your professional worth.

The second realization is that I shouldn't be the font of all knowledge.  Students can't just come knocking on my door whenever they have a question about some content area.  What's important is that we are all lifelong learners and that we exist in a network (how connectivist of me).  If we don't know something we can (and should be able to) find it through our network of humans and non-human appliances. As an instructor - a professor - one of my objectives should be to help students become self-sustaining learners; otherwise their degrees become not as valuable (some may say worthless) a few years after they graduate.

Once I got comfortable with these two propositions, impostor syndrome went away for me.  In terms of grading assignments, I too am all about the feedback.  I dislike grades (The Chronicle had an article on grades the other day). I wish our courses worked on pass/needs improvement for grades, as this would better align with how I design the classes that I teach now.  As I was reading Rebecca's post I reflected a bit on what it was like the first time I taught INSDSG 619 (now 684).

The course was designed by a colleague to be an exemplar of how to design for online learning. What you read in the course on a week-to-week basis was also reflected in the design of the course itself.  I've written before about not feeling empowered to change things (other than updating readings to keep current).  One of the things I really disliked about that course was the rubrics for assignments.  Now, in theory rubrics are a good idea.  They describe to the learners what they need to do in order to pass the assignment.  However, I found some rubrics so non-granular that basically everyone who put a little effort in could get an "A".  There is nothing wrong with everyone getting an "A"; however, I noticed (over 3 years teaching that course) that the quality of projects would vary greatly, yet learners were still all getting an A or an A-.  That is because the rubric I inherited had only 3 levels, and the differentiators between the levels were so minute (in my mind) that a lower grade was really the result of a technicality (again, in my view).

In any case, for the introductory course that I taught last summer, I decided to start from scratch and make all assignments pass/needs improvement. This way I can make an assessment as to whether something is passing (or not) and then focus more on giving feedback.  The main issue - when it comes to grades - is how do you differentiate an A from a B from a C?  The answer is imperfect: volume!  The more assignments you do that are of passing quality, the higher your course grade.  It's not something I like, but it works for now.  I guess I'll need to brainstorm more about this. The plus side is that I am not feeling impostory, so that's out of the way ;-)
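As a sketch of what that volume-based approach could look like, here is a small hypothetical mapping from the number of assignments of passing quality to a course letter grade. The thresholds are invented for illustration; they are not the actual cut-offs from my course.

```python
def course_grade(passed_assignments: int) -> str:
    """Map the count of assignments of passing quality to a letter grade.

    The thresholds below are purely illustrative; the idea is simply that
    more passing-quality work yields a higher course grade.
    """
    if passed_assignments >= 12:
        return "A"
    if passed_assignments >= 10:
        return "B"
    if passed_assignments >= 8:
        return "C"
    return "Needs improvement / incomplete"

# Example: a student who passed 11 of the assignments
print(course_grade(11))  # -> "B"
```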

Seeking the evidence


In my quest to catch up on Pocket - before the news becomes stale - I came across this post by cogdog on seeking the evidence behind digital badges.

The anatomy of the Open Badge includes the possibility of attaching links to evidence for the skill you are being badged for.  Of course, just because there is an associated metadata field available for people to use, it doesn't mean that people actually use it!
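For anyone who hasn't poked at the badge metadata itself, here is roughly what that looks like. The sketch below (written as a Python dict) is modeled on the Open Badges assertion format, with the optional evidence field filled in; all of the URLs and values are placeholders, not a real issued badge.

```python
# A minimal, illustrative badge assertion with an evidence link.
# Field names follow the Open Badges assertion format; all values are placeholders.
assertion = {
    "@context": "https://w3id.org/openbadges/v2",
    "type": "Assertion",
    "id": "https://example.edu/badges/assertions/1234",
    "recipient": {"type": "email", "hashed": False,
                  "identity": "learner@example.edu"},
    "badge": "https://example.edu/badges/classes/connectivism-explainer",
    "issuedOn": "2015-07-20",
    "verification": {"type": "hosted"},
    # The optional part this post is about: a pointer to the learner's work.
    "evidence": [{
        "id": "https://example.edu/portfolios/learner/connectivism-project",
        "narrative": "Deliverable created to teach classmates the principles "
                     "of connectivism.",
    }],
}
```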

I know that the evidence part of badges is often touted as an improvement over grades in courses, or over diplomas, because grades don't indicate what specific skills you've picked up. This problem is a bit worse with diplomas and graduation certificates because you can't evenly compare one candidate to another (in my case it would be comparing me to some other computer science major from another university - or heck, even my own university).

Anatomy of badge, by ClassHack

So, in theory, badges are superior to these other symbols of attainment because they tie into assessable standards (that a viewer can see) and they tie into evidence.  And, again in theory, the layperson (aka the hiring manager) can read and assess a candidate's skills and background.  In practice, though, the evidence is lacking, and I am just as guilty of this, having issued badges in two of the courses I teach. From my own perspective-from-practice I see at least two reasons for this lack of evidence:

1. Not all badges are skills based, and the evidence is not always easy to show.
I use badges in my courses as secondary indicators. They are less about skills and knowledge, and more about attitudes and dispositions.  So, I have secret badges, sort of like the secret achievements in Xbox games, that unlock when you perform specific tasks.  I let students know that there are secret badges, and that they relate to forum participation, but I don't give them the criteria so that they don't game the system.  The objective is to reward those who exhibit open behaviors and learning curiosity.  Once a badge is unlocked I make an announcement, and people can have a chance at earning it too, if they want.  In cases like these the badge description acts as the means of telling the reader what a person did to earn that badge (i.e. helped fellow classmates at least 3 times in one semester), but I don't attach evidence from specific forums; that seems like a ton of work for nothing, since looking at text from disconnected posts isn't meaningful to someone (see the sketch after point 2 for the kind of rule that sits behind such a badge).


2. Good enough for class - but good enough for evidence?
Another concern I've had when thinking about attaching evidence to the badges that I issue is fit for purpose.  Some badges are known to my students (say, the 'connectivism' badge, where students in an intro course create some deliverable to train their fellow students on the principles of connectivism).  For my purposes an assignment might be good enough to earn a passing mark.  However, my fit for purpose is not someone else's fit for purpose.  Because of this I have not included links to evidence.  Furthermore, some of the evidence is either locked behind an LMS, or it's on someone's Prezi account, or Weebly site, or Wikispaces page.  The individual student can delete these things at will, so my links to those resources would also go dead.  So, there is an issue of archivability.
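Tying back to the secret participation badges in point 1, here is a minimal hypothetical sketch of the kind of rule that could sit behind such a badge. The threshold of three helpful posts is the example criterion mentioned above; the function name and data shape are invented for illustration.

```python
# Hypothetical check for a "secret" participation badge (illustrative only).
def earned_helper_badge(forum_posts, threshold=3):
    """Return True if the student has at least `threshold` posts flagged
    as having helped a fellow classmate this semester."""
    helpful = sum(1 for post in forum_posts if post.get("helped_classmate"))
    return helpful >= threshold

# Example with made-up forum data
posts = [
    {"week": 2, "helped_classmate": True},
    {"week": 5, "helped_classmate": True},
    {"week": 9, "helped_classmate": False},
    {"week": 11, "helped_classmate": True},
]
print(earned_helper_badge(posts))  # -> True
```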


One of the things that cogdog mentioned was that "being badged is a passive act".  I think that in many instances being badged is passive, and that has certainly been my experience in a number of cases.  However, I have seen exceptions to this. There have been a couple of MOOCs, such as the original(ish) BlendKit and OLDSMOOC, where I had to apply in order to receive a badge.  This allowed me, as a participant and learner, to say that I was ready to be evaluated, and the outcome would be a badge if I passed.

What do you think?  Is the evidence more hype than anything else?  Can it be done better? If so, how?

Assessment in MOOCs

The more chapters I read in Macro-Level Learning through Massive Open Online Courses (MOOCs): Strategies and Predictions for the Future, the more I am starting to feel like Anton Ego from the animated movie Ratatouille ;-)  It's not that I am aiming to write harsh reviews of the stuff I read, but the anticipation I have for some of the published work on MOOCs just isn't met with a corresponding level of satisfaction when I actually read it.

This time I am reviewing chapter 7, which is titled Beyond the Phenomenon: Assessment in Massive Open Online Courses (MOOCs).  The abstract is as follows:
MOOC course offerings and enrollments continue to show an upward spiral with an increasing focus on completion rates. The completion rates of below 10 percent in MOOCs pose a serious challenge in designing effective pedagogical techniques and evolving assessment criterion for such a large population of learners. With more institutions jumping on the bandwagon to offer MOOCs, is completion rate the sole criterion to measure performance and learning outcomes in a MOOC? Learner interaction is central to knowledge creation and a key component of measuring learning outcomes in a MOOC. What are the alternate assessment techniques to measure performance and learning outcomes in a MOOC? MOOCs provide tremendous opportunity to explore emerging technologies to achieve learning outcomes. This chapter looks beyond the popularity of MOOCs by focusing on the assessment trends and analyzing their sustainability in the context of the MOOC phenomenon. The chapter continues the discussion on ‘ePedagogy and interactive MOOCs' relating to ‘performance measurement issues.'

When I was a student in Applied Linguistics, for some of my courses I had professors who had us writing essays to practice answering questions in our field while staying on-point: not meandering, and not including material that was simply not connected to the questions asked.  This was an invaluable exercise, as it helped hone my skills as a writer.  Receiving peer review was also important, but I think having that in-class experience was really fundamental.  I think that it is this type of feedback that is missing in several chapters I've read thus far.

For example, this chapter (according to the abstract) asks what the alternative assessment techniques for MOOCs are.  This is a good question! The author writes about the (on average) 10% completion rate in xMOOCs (the author isn't specific that they are xMOOCs, but you can tell from the text), and argues that other measures of what it means to complete a MOOC are necessary.  I completely agree with this position.  However, the author never really defines how completion is measured in those xMOOC contexts, and it's this that I find problematic. How can one start talking about alternatives (or additions) when the current state is not really defined in terms of what assessment takes place in MOOCs to derive that magical completion number?  This is particularly important because the author (in the solutions & recommendations section!) then goes on to describe CPR (Calibrated Peer Review) and AES (automated essay scoring), which are used in Coursera and edX MOOCs. These are tools used now to determine whether someone has (or has not) done what they need to do to be considered a completer. This doesn't really move the needle in terms of thinking about alternative (and alternatives to) assessment in MOOCs.

They talk (encyclopedically) about proctoring, MOOCs for credit, verified certificates, different MOOC 'types' (DOCC, BOOC, LOOC, SPOC, MOOR, SMOC), and digital badges (just to name a few things), but all of this is really disconnected from assessment in general.  Proctoring, credit, verified certs, and badges are by-products of assessment (not assessment types), and MOOC types don't really contribute much to the assessment discussion.  The language of MOOCs (i.e. the predominance of English) is discussed, but only really to suggest that existing assessment instruments (which only yield a 10% completion rate) be translated. OK, I don't disagree, but can't there be more substantive discussion here? How can this help more learners complete the MOOC? And is a higher completion rate what we are looking for? Or is there a more nuanced understanding of learning, assessment, and completion in MOOCs to be had?

I did chuckle a bit when I read that "a discussion forum is the main course component for active learner interactions and course participation in an online learning environment including a MOOC." (p. 125).  While xMOOCs tend to have forums, not all MOOCs are traditionally forum driven, and I would say that forums aren't the main course component for course activity.  I think by claiming this you are really framing MOOCs with the same frame as a certain type of online and distance education course for one thing.  It also predisposes one to think of activity (and what is assessable) in specific ways, ways which are defined by existing learning environments that have other underlying factors that influence and impact their design.

I really wanted to like this chapter (I really did :-) ); however, between the disconnected information and the failure to deliver what was promised (or at least what I read into the abstract), it's hard to say that it's a must-read.  That said, if you completely ignore the title of the chapter and ignore the abstract, it's not a bad summary of current and potential topics to consider in the credentialing of MOOCs.

Have you read this? Your thoughts?



CITATION:
Chauhan, A. (2015). Beyond the Phenomenon: Assessment in Massive Open Online Courses (MOOCs). In E. McKay, & J. Lenarcic (Eds.) Macro-Level Learning through Massive Open Online Courses (MOOCs): Strategies and Predictions for the Future (pp. 119-140). Hershey, PA: Information Science Reference. doi:10.4018/978-1-4666-8324-2.ch007

Assessing the process or the product?

The other day I came across a post on ProfHacker written by Maha B. where she talked a bit about her own teaching experiences and whether one assesses the process of learning or the product of learning.  I was thinking about this question in light of my own experiences as a learner, as a designer, and as an instructor who has now taught introductory, intermediate, and capstone courses.

Obviously there isn't a hard and fast rule about this.  In some courses, or parts of courses, the final product matters enough that it outweighs the process in the grading.  My gut (and the educator in me) tells me that the process is really more important than the final product. However, my own acculturation into the field of instructional design snaps me back to SMART outcomes (you know: specific, measurable, attainable, realistic, and time-bound), wherein these goals are really about the product and not the process.  I suppose if you have the freedom to tweak the learning objectives of the course so that you can value the process more than the outcome, then that is fine.  However, if you can't tweak the objectives, you have a bit of an issue.

Another thing I was considering is the specific outcome of the class.  For example, in an introductory course I taught last summer I often leaned more toward process than overall quality of the deliverable. This made sense since the learners were new to instructional design and, like artists, they needed several tries, and several rounds of feedback, in order to become better at design.  The final product that they produced was pretty good, but it could be better with the knowledge gained in subsequent classes. So, the final product was good for the amount of information and experience they had on hand.

On the other hand, this past fall I supervised the capstone project for my instructional design MEd program, along with friend and colleague Jeffrey K.  Even though this was a 3-credit course, there wasn't really anything being taught.  It was more of an independent study where each learner worked on their own capstone project and received feedback on each iteration submitted. While there is a deliberate process behind this capstone project, the final product is meant to be a showcase of their capabilities as instructional designers, given that they have completed at least 10 courses before they sign up for the capstone.  In cases like these, while process is important (feedback and improvement cycles), the final product is, in my mind, really what's being assessed.

That said, the case of the capstone is quite specific, and perhaps an outlier in this discussion.  In classes that I design I prefer to give more weight to the process and less weight to the perfection of the final product.  It's one of the reasons I have largely moved to pass/not pass grading in new courses I design.  Instead of having students feel like they've gotten a ton of points off for one thing or another (despite the passing grade), I think it's better for them to know that they passed the assignment and to really look at the feedback that I gave them.  If they don't put that feedback to use, they may not pass subsequent assignments (which they can rework and resubmit), but what is important is that feedback-revision cycle.  I think that it really mirrors how instructional design works anyway :-)

What do you think?



Lurk on, dude, lurk on!



The other day, while catching up on my (ever-growing) Pocket reading list, I came across a post on MOOCs from friend and fellow MobiMOOC colleague Inge.  It was a rather on-the-nose post about MOOCs, learning, assessment, and the discourse used in MOOCs about learners. Concurrently, I am working with a Rhizo team on a social network analysis post where the topic of 'completion' came up, and we started discussing the real connotations of completion.  How does one measure 'completion' in a MOOC? Is it a worthwhile metric? And what about engagement?  Finally, to add to this volatile mix of intellectual ideas, I am working on a conference presentation with fellow lifelong learner and MOOCer Suzan†.

These raw materials made me think back to the early discussions on MOOCs (before the 'x' ones came out) and discussions about lurkers in MOOCs.  Before the xMOOC came out we didn't seem to frame non-visible members of the community as 'dropouts' but rather as lurkers.  There were probably people who quit the MOOC, as in they came, they saw, it wasn't for them, they left - but we left the door open for them to be lurkers if they wanted to.

Early on I viewed MOOC participation as sort of similar to the participation patterns in a community of practice (at least those that I had learned about in school), which are visually depicted by the image in this post: ~90% are lurkers, ~9% contribute, and ~1% contribute a lot‡.  In one of my earlier MOOCs, #change11, I engaged more with the idea of lurkers, and the main thesis I had (at least in retrospect) was that at best they were harmless onlookers, and at worst they didn't contribute to the continued well-being of the community.  I viewed (and still view) learning as a communal activity, so the more people participate in the network of learning, the better the outcome for everyone.  It allows depth of conversation, different discussions to take place, and diversity of opinion.  My concern was that when a lot of people lurk, a critical mass for community purposes would not be available, so a learning experience could either not get off the ground or could not be sustained.
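A quick worked example of why that concern arises, using the rough 90/9/1 split above (the enrollment figure and the target of 50 visibly active participants are arbitrary, chosen only to illustrate the arithmetic):

```python
# Rough 90/9/1 community-of-practice split (illustrative numbers only)
LURKERS, CONTRIBUTORS, HEAVY = 0.90, 0.09, 0.01

enrollment = 1000
print(f"Of {enrollment} sign-ups: ~{enrollment * LURKERS:.0f} lurk, "
      f"~{enrollment * CONTRIBUTORS:.0f} contribute, "
      f"~{enrollment * HEAVY:.0f} contribute a lot")

# Inverting the question: roughly how many sign-ups are needed so that
# about 50 people (an arbitrary target) are visibly active?
target_active = 50
needed = round(target_active / (CONTRIBUTORS + HEAVY))
print(f"~{needed} sign-ups for ~{target_active} visible participants")
```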

Fast forward to 2015.  After more than 100 xMOOCs, cMOOCs, pMOOCs, rMOOCs, αMOOCs, βMOOCs, γMOOCs, and other free online learning experiences, I am not really sure where I stand on the subject of lurkers.  Well, I do, but I am also conflicted.  See, learner choice is one big aspect of learning.  You cannot really force anyone to learn something, or to participate in some experience.  This holds true for open learning experiences like MOOCs, and for closed experiences (paid courses, seminars, workshops, etc.). Intrinsic motivation is important in learning, and it's what pulls the learner through times both easy and difficult.  In this respect, if what motivates learners is to lurk, or to participate only in certain weeks or modules, then that is not only perfectly OK, it should be encouraged.

The point of conflict, however, comes in kickstarting and sustaining the learning community. Let's say I am an open learning designer♠ and I have this awesome course I am thinking of designing for a certain demographic.  Sort of like hosting a party, I don't want it to fail. I want people to attend, be engaged, and have fun (and learn something in the process).  What can I do to make sure that there is a minimum mass to sustain the course through its x-week duration?  Do I do anything to recruit and tend to the learning garden? Or do I let it run wild, and if it succeeds - then great, and if it fails (like a lame party), then that's OK too?

I guess what I am asking (and proposing a discussion on) is this: what are some #altMetrics for MOOC success other than visible participant engagement, or 'course completion', or any one of the traditional success factors?  By de-coupling attendance from success metrics, I think we can be quite fine with having a ton of lurkers in our MOOC and still have the MOOC be a success.  Lurkers get what they need, active participants get what they need, course designers get what they need.  It's a win-win.  But - how do you get there?

Thoughts?




SIDENOTES:
† You know, when I tell people (who already have a PhD) that I am pursuing my EdD online through Athabasca University, I get a bit of a sour face. They can't fathom how you can develop academic relationships that lead to stimulating discussions (and papers) at a distance.  Between my cohort and the people I've met in MOOCs, I think I have had more mental stimulation than people in residential programs - just saying ;-)

‡ Wonder if this triangle is a distortion, sort of like Edgar Dale's corrupted cone...

♠ Mark my words, Open Learning Designer will be a job title soon enough, if it's not already. Probably a type of instructional or learning designer ;-)