Club Admiralty

v7.0 - moving along, a point increase at a time

Multilitteratus Incognitus

Traversing the path of the doctoral degree

Academic Facepalm (evaluation edition)

Back in December, I was searching for the #tenure hashtag on twitter.  There was some discussion (probably started by Jesse Stommel 😜) which prompted me to search for this hashtag out of curiosity to see what was tagged.  Along with heartwarming stories of people who've just earned tenure (a nice perk right before the winter break!), there was this wonderful tweet specimen...



I'm not gonna lie.  IT BUGS ME.

It bugs me as a learner.  I've always completed course evaluations and tried to give honest feedback to the professor.  If the course was easy, hard, or just right, I wanted them to know.  If I was appreciative, I wanted them to know.  Yes, sometimes I've half-assed it and just completed the Likert scale with a "loved the course" comment at the end, but many times I try to be more concrete about the feedback.

It bugs me as a program manager. I am the individual who sets up and collects the course evaluations, and who often reminds students about them.  My colleague is in charge of making sure things like these get into personnel files and maintains department records; she also seems to manage the tenure and promotion paperwork for our department (among her other duties).  Faculty committees spend time discussing this each year for merit increases.  So. much. wasted. effort!

It bugs me as an adjunct.  Yes, I teach for the fun of it. I like helping new instructional designers find their footing.  As an adjunct, if my course evaluations are bad I could be not hired again just for that.  There are no protections.  And then you've got this tenured individual who openly flaunts their privilege.

Now, don't get me wrong.  I know that Level 1 evaluations are flawed.  They don't measure learning, they measure reactions to the learning event.  But they are feedback nevertheless.   If you don't give a bleep about what students say about your course, one day, despite your tenure, you might not have any students left...

As an aside, I feel like tenure is an outdated institution.  I'd advocate for strong unions over tenure any day of the week.

Your thoughts?

Hidden Scholarship: under-reported achievements of academics

It seems like forever ago that I read this article by Maha Bali on ProfHacker on Hidden Scholarship†. It's actually been on my radar for a while, but between work and class the mind space for this was not available.

In any case, if you haven't read this brief post on ProfHacker, it's worthwhile reading. Maha writes about things that go under-reported, or not reported at all, when it comes to scholarship by academics.  I think that a lot of things go under-reported, and part of it is that they aren't valued as much by our peers out there. One of the things that Maha mentions is peer review.  I am actually pretty happy that an academic social media platform (Publons) is working on this; their social network is based on creating some sort of record of peer review. You can see my profile here as an example. That said, it's really up to the peer reviewers to submit/forward their receipts from peer review systems, and then the Publons system works out whatever it needs to work out to verify that you did indeed do the peer review for that article and lists it on your profile.  That still leaves a lot of hidden scholarship (if you've been peer reviewing for a while, anyway). Luckily (in a sense) I haven't peer reviewed much, so I didn't have to think TOO hard about everything I was asked to peer review‡, and it was fairly easy to remember most things I did.

I agree with Maha as well that collaboration isn't always valued by peers at the same level (at least in my own contexts).  It seems to me that co-authored works tend to get fewer "points" in faculty reviews compared to single-authored works.  On a similar note, I find - again in my own contexts - that conference presentations are given fewer points than published papers. It seems to me that works that might be more ephemeral in nature (like some conference presentations) should not be given short shrift because of their medium.  I think that those are just as valuable as a paper published in a peer-reviewed journal.

Blog posts were something mentioned in the comments to the article as well.  Blog posts in general don't have the cachet that other, more established forms of scholarly work have - especially if you only post on your own blog! I've been asked several times in the past to contribute blog posts for different organizations and sites♠.  One thing that comes to mind - for me - is why drive traffic to someone else's site with my blog post when I can just as easily post it on my own blog.  I don't receive any peer review for it, so I might as well keep it on my own playing ground.  In the past I had submitted blog posts to now-defunct sites, and those blog posts are lost for the most part (luckily, for some I had the foresight to keep the text).  Some of those sites might argue that contributing could drive readers to this blog, but I don't have any aspiration of aggressively growing my readership.  I blog as a way of sharing what I know, to process new knowledge, and to engage with people¤. I think organic rather than forced readership is much more valuable. That said, I wouldn't mind having a byline on ProfHacker one of these days ;-)

One of my own contributions to this list of hidden scholarship is student advising. I don't think that academics get much credit for this.  Most academics stay up to date with their field. They read the contributions of their field, they incorporate that into any research they do, and they help guide students toward that literature - students who may not yet be at the right spot to navigate there on their own, but for whom the research is important nevertheless.  I think that student mentorship (real mentorship, not just the quick 10-minute advising session each semester) is something that academics need to be recognized for as scholarship. Related to this academic mentoring is helping students grow as professionals and researchers in the field.

What are items that would go on your list of hidden scholarship?



NOTES:
† just a quick date comparison indicates that it's been over two months at this point in time! Wow! Took a long time to respond to this :-)
‡ I am open for peer review gigs if you need a peer reviewer - just saying ;-)
♠ I prefer to not name them, fwiw.
¤ engage through twitter, facebook, google+, and on comments - generally speaking

MOOC Standards...what do these look like?


The case of MOOC standards (as well as MOOC sustainability) is something that keeps coming back to me as a topic of pondering.  I read about it in other blogs.  Then I want to respond to some of these articles, and bounce off some ideas, but I lose motivation and decide "m'eh" - this topic isn't of much interest.  Then, a little while later, my interest in the topic rekindles.  I thought it would be best to at least write something to keep this conversation on quality going (it might even motivate me to write more in depth...or collaborate with some colleagues to produce something more "academic").

In any case, the most recent thing I read about MOOC quality, and what that might look like, is from eCampus News from about a month ago (something sticking out in my Pocket to-read list). The article points to recent research published in IRRODL where the Quality Matters rubric was used to keep quality under control in a MOOC. I haven't read the article - I've been busy with school, but it's on my radar for a deeper reading once this semester is over†. However, even though I have not read this article yet, I can tell you that my professional opinion is that the QM rubric is the wrong instrument for measuring quality in MOOCs.

Don't get me wrong! I like QM, I am QM certified, and I have served on review teams for online courses that want to be QM certified.  However, the heuristics of small online courses, ones that are not open or free, are different from the heuristics of MOOCs that are open, often free of cost, and have an open entry/exit policy for people learning and/or engaging in them.  Can you design MOOCs to fit the QM rubric?  You bet!  Will those MOOCs be successful?  Maybe?  But if they are, it won't be because the course was designed with QM in mind.

QM, as with any measurement that aims to be objective, has specific ways of measuring what is of value, and when we use such measurements we tend to industrialize the learning process.  At this point in my development I'll be bold enough to say that it is inevitable that objective measurements create some sort of industrialization, but I may be wrong.  I think that the power of MOOCs is that we are able to break from the current mold of what we conceive as learning, and learning online, and learning in distributed environments, and try out new things.  We don't know yet what works, but we've only been at this for a little while.  xMOOC providers, such as udacity, coursera and to some extent edx, are under pressure to produce profit in some way, shape, or form, to show their funders that this is a worthwhile venture - and that they won't go the way of Fathom.  Some modified version of QM could apply to xMOOCs, but I think that what Siemens said recently is quite true. MOOCs (xMOOCs) are a regression of education - not a progression to the next big thing or 'aha' for us. cMOOCs, and other types of MOOCs that are more experimental in nature, have the potential to show us some interesting things, but not if we shoehorn them into our conceptions of what "good online learning" is with frameworks like QM that are geared toward a different type of design and learner demographic.

Thoughts?




† ...and once I also finish reviewing that edited volume on MOOCs... argh... the to-do list keeps getting bigger.


Quality of MOOCs?




Continuing on with the review of articles in the book titled Macro-Level Learning through Massive Open Online Courses (MOOCs): Strategies and Predictions for the Future, today I have a chapter dealing with the quality of MOOCs.


Chapter 2 is titled Quality Assurance for Massive Open Access Online Courses: Building on the Old to Create Something New. The abstract tells us:
Institutional quality assurance frameworks enable systematic reporting of traditional higher education courses against agreed standards. However, their ability to adequately evaluate quality of a MOOC has not been explored in depth. This chapter, Quality Assurance for Massive Open access Online Courses – building on the old to create something new, explores the added learning and teaching dimensions that MOOCs offer and the limitations of existing frameworks. Many components of a MOOC are similar to traditional courses and, thus, aspects of quality assurance frameworks directly apply, however they fail to connect with the global, unrestricted reach of an open learning and teaching platform. The chapter uses the University of Tasmania's first MOOC, Understanding Dementia, as a case. MOOC-specific quality assurance dimensions are presented in an expanded framework, to which the Understanding Dementia MOOC is mapped, to demonstrate its usefulness to a sector grappling with this new learning and teaching modality. This chapter continues the commentary on – Policy issues in MOOCs Design, through the topic of ‘quality issues critical comparison – contrasting old with new.'

This was an interesting article, not because of the MOOC angle, but really about learning more about accreditation and peer review in an Australian context.  The MOOC angle seemed...a little off.  There are two big questions that came up as I was reading this article:

  1. Why does an institution offer MOOCs?
  2. How does one measure 'quality' in an educational context?


Now, I know that we have frameworks available to us as educators to quantify the 'quality' of our online courses. One prime example is Quality Matters.  However, I think that all quantified means of measuring human learning fall short.  I've passed many courses in my days as a learner (especially in required undergrad courses) where I just checked items off the list.  I knew the lines I was expected to paint in, and I did so proficiently enough to pass tests.  Hence, quality-wise, I guess that means the course was good, since I passed it and the course had gone through the requisite steps of both internal and external review, but it doesn't mean I learned anything.


One of the proposals of the authors is that MOOC business models have failed to reflect 'reality' because they have not been formally integrated into university frameworks through quality assurance. I didn't see anything in this article that supports this hypothesis.  Quality is a tricky thing.  Unfortunately, for education I don't think that there is one simple solution to obtaining and measuring quality.  We have, in my opinion, come up with a system that tries to keep honest people honest; however, I don't think this system of peer review, internal and external review, and course evaluation is any indication of quality.  Quality seems a bit elusive as a concept because it means different things to different people.

The type of quality we see described in traditional contexts is that of design.  Making sure that (a) goals and objectives match the (b) instructional activities, that (c) assessments tie back to objectives, and that the materials used tie back to a + b + c. This is a simplified view, but it's all about connecting the dots in course design.  Actual learning and application - once the class is over - is not usually something that is testable.  In the parlance of Kirkpatrick's model of evaluation, we undertake level 1 and level 2 evaluations, but we are not able to conduct level 3 or level 4, which would require us to have access to the learners after the fact for further testing.  In graduate programs where there might be a final capstone, portfolio, or comprehensive exam, you might be able to conduct level 3 evaluations to some extent, but that's about it.
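
Just to make the connect-the-dots idea a bit more concrete, here is a toy sketch (purely illustrative - the course data and the little function are my own invention, not any official QM procedure) of the kind of alignment check a design review is really doing:

# Purely illustrative: a toy "alignment check" in the spirit of design-focused
# quality rubrics. The course data below is invented for the example.
course = {
    "objectives": {"O1", "O2", "O3"},
    "activities": {"week 1 discussion": {"O1"}, "case study": {"O2", "O3"}},
    "assessments": {"quiz 1": {"O1"}, "final project": {"O2", "O3"}},
    "materials": {"textbook ch. 1-3": {"O1", "O2"}},
}

def alignment_gaps(course):
    """Return the objectives that each course component fails to tie back to."""
    gaps = {}
    for component in ("activities", "assessments", "materials"):
        covered = set().union(*course[component].values())
        gaps[component] = course["objectives"] - covered
    return gaps

print(alignment_gaps(course))
# {'activities': set(), 'assessments': set(), 'materials': {'O3'}}

Of course, passing a check like this only tells you that the design dots connect; it says nothing about whether anyone actually learned or applied anything, which is exactly the level 3 and level 4 problem.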

So, when we're talking about "quality" in MOOCs it's important to figure out what we mean by quality.  The other thing that makes MOOCs, in my opinion, a bit harder to assess, especially in implementation, is the variability of the learners in the course.  In traditional assessments of courses we know that courses need a minimum number of students to run (a business decision), so faculty can plan potential activities knowing the lower and upper limits.  In a MOOC this is pretty hard because registrations mean nothing. How are outcomes measured when there is a lot of potential flux?

In terms of making the decision to offer a MOOC, the big question is why do universities do this?  What's in it for them?  The public education and access mission of some schools might be a reason, but given the costs described by the authors of making a MOOC, why go through these steps?  Why not focus on OER development or something cheaper? I am sure that there is still hope for the academic youtube channel ;-) The authors write, rightly so, that MOOCs are not an easy path to revenue, so I am curious as to the reasons institutions decide to offer these MOOCs (other than the "they are new and shiny, and we must participate!" type of reason).

The authors, going back to quality assurance, claim that the "traditional approach of utilising external peer review to ensure that the course level learning outcomes are appropriately calibrated still has merit in the MOOC environment".  To a small extent I agree, if you are talking about specific xMOOCs with specific outcomes and specific limitations. However, I am reminded of a comment a friend and colleague (Maha B) made somewhere online (twitter? blog? facebook?) about feeling constrained when she had to fully develop the course structure of a (traditional) online course before the course started. This didn't leave much flexibility for learner interests.  I see where Maha is coming from, and for experienced educators, while that kind of flexibility makes me nervous, I keep an open mind.

Personally I like everything planned ahead of time, for two reasons: (1) I know the overall path I've designed, I can work with it to help guide novice learners on rails, and I can also defend the design when it comes to a curriculum committee; and (2) it helps learners plan the semester to have something on rails.  That said, I do not like being rigid in my teaching - just because we have a roadmap it doesn't mean that we can't take the path least travelled, or even go off the road.  This little sidebar was with regard to 'regular' courses.

With MOOCs - given that they are a form of online education we are still studying in its nascent state - it makes little sense to try to pigeonhole them into a rigid structure that was built in order to ensure that college credit was worth something comparable between institutions.  MOOCs are not credit-bearing courses. They are optional, free, open to entry and exit, and they don't award any college credit.  So why try to slice them and dice them using measurements that were created for credit-bearing courses when the actual ethos and purpose of such courses is not the same as that of credit-bearing college courses?  Furthermore, MOOCs (again depending on the course) can be largely undeveloped at the beginning.  There can be connecting and connective threads going from week to week; however, the entire course structure need not be completed from the outset.  This is one of those constraints that exists with credit-bearing courses, but there is no reason for it to exist with MOOCs.

In the end, I think that the concept of 'quality' in a MOOC won't elicit a unified definition of what that looks like.


Thoughts?




Citations:

Walls, J., Kelder, J., King, C., Booth, S., & Sadler, D. (2015). Quality Assurance for Massive Open Access Online Courses: Building on the Old to Create Something New. In E. McKay, & J. Lenarcic (Eds.) Macro-Level Learning through Massive Open Online Courses (MOOCs): Strategies and Predictions for the Future (pp. 25-47). Hershey, PA: Information Science Reference. doi:10.4018/978-1-4666-8324-2.ch002

Perspectives on Late point deductions

I guess it is teaching preparation time!  These past few weekends I've been going through my online course, updating due dates for assignments, and slowly starting to make the changes to the various modules that I had scribbled down as the course was in progress last spring.  It's still up in the air as to whether or not the class will run, so I am thinking of applying for an assistantship for this fall semester.

In any case, in preparation for this course (if it runs) I've signed up for a variety of MOOCs on Coursera and on Canvas.net that deal with the subject of teaching online. I figure that this is a good opportunity for me to get some professional development, but also to discover any materials that I was unaware of. This way I can share these materials with my students (the course is about course design and teaching online).  My Pocket reader had filled up with a lot of reading to go through and evaluate.  As I was reading some of the materials, this one stood out to me: Enough with the late penalties.

This is actually something I've struggled with in my (brief) career as a professor (still sounds weird to call myself that). In the first few semesters of teaching I didn't really deduct points for lateness.  I had a clause in my syllabus, but I never really exercised it unless the late submission was egregious, like being a few days late without an excuse.  Last spring I decided that to be fair and equitable I needed to really apply the late penalty on assignments equally across the board.  Thus, if an assignment was a day late, it got 5 points shaved off the top. Two days late? Another 5 points off, and so on.  What I noticed was this: the percentage of students who were late remained the same; they were just getting points off now.  The percentage of people who were on time remained the same.  There was only one small group that was late by 30 minutes every now and again, as most adults have competing deadlines and sometimes fall behind on some things.
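
In code form, the policy was about as simple as it gets - something like this little sketch (just an illustration; the zero floor is my own addition, not part of the actual syllabus language):

# Illustrative only: the flat 5-points-per-day late penalty described above.
# Clamping the result at zero is an assumption made for the example.
def apply_late_penalty(score, days_late, points_per_day=5):
    """Deduct a flat per-day penalty from an assignment score."""
    penalty = points_per_day * max(0, days_late)
    return max(0, score - penalty)

print(apply_late_penalty(95, 0))  # on time: 95
print(apply_late_penalty(95, 2))  # two days late: 85

The observation above is basically that the distribution of days late didn't budge once this was applied; only the scores did.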

Going back to the article†, I agree in principle.  When we assess our learners we ought to be assessing what they are producing.  Are they demonstrating that they've learned what we set out for them to learn at the beginning of the semester? We are not assessing their timeliness in submitting their materials, so why take off points?  I suppose that I could add an objective to my course indicating timeliness, but are we working with adults here or not?  In the grown-up world there are deadlines with consequences. If you are late with the submission of an RFP bid, your company may not get the contract. If you are late with your deliverables at work, it may cost the company money; or, in academia, if you are late in contacting prospective students, chances are that they go elsewhere and you miss out on some potentially brilliant scholars in the making. There are consequences in life to being late, so why not apply this toward grades?

The article has a few suggestions, such as having a due date window where students can submit their work.  I am curious to know how this is different from a due date.  At the end of that window you have a firm deadline.  If you have advertised that an assignment is due on August 15th, and it is now July 10th, doesn't this give the learner enough time to plan? I wouldn't give someone more points if they submitted early, but what I would give is more feedback so that they can improve their submission and resubmit (for a better grade if they wished).

Having seen students who submit work late, I am wondering what the best approach is for helping the learner while still getting work in on time, because that matters.  I am not yet convinced that deducting points for lateness is a bad idea, but I am interested in the debate.

If you teach, or if you are a learner, what are your thoughts?


† I would say that the article deals with children in a K-12 environment, but someone might make the same argument for adult learning situations as well.

SPOCs are illogical

Angry Spock (Star Trek reboot)
OK, OK... the title was easy pickings, but this article is quite serious.  I've chosen to ignore, for the most part, the whole idiocy of the term SPOC (small private online course).  SPOCs are really just "regular" online courses, as I've written in my one other post about SPOCs. It bothers me that there is so much revisionist history around the topic of "traditional" online education, with articles such as these where organizations like Colorado State University claim to be "pioneers" in SPOCs since they've been doing online education for the past five years.  A whole five years? Our fully online Applied Linguistics MA has been around for eight years, and our overall organization, UMassOnline, has been around for about ten years doing "SPOCs." Maybe we are pioneers too, who knows, but it's really difficult to critically discuss MOOCs, traditional online education, and flipped classrooms when people muddy the waters with SPOCs, another useless acronym that overlaps with currently existing terms.

So, I was pretty content to just ignore SPOCs, but this blog post came across my twitter feed (I think courtesy of EDCMOOC) that I couldn't ignore from a philosophical perspective. Well, it was this article, and the mention of the term by a colleague of mine (which made me almost gag), that really was the impetus for this post. So, in this article SPOC has been succinctly defined as:

The term “SPOC” has been used to describe both experimental MOOC courses where enrollment was limited as well as packaging options whereby academic institutions can license MOOC content for professors to use as components in their traditional courses.
This is a good place to point out that an "experimental MOOC" is redundant.  ALL MOOCs at this point are experimental.  We haven't cracked this nut, so we're experimenting with large-scale online courses and various evaluation mechanisms in an environment where we're not as worried about accreditation and academic honesty.  Sure, we pay lip service to academic honesty by clicking the little "I am in compliance with the honor code" button, but at the end of the day no one is risking their reputation as far as academic honesty, retention, and measurable outcomes go.

Beyond the whole experimental thing, I should point out, and will elaborate on a little more later on in this post, that MOOCs and licensing are antithetical to one another.  Part of MOOC is Open.  We can argue all day about what "Open" really means, but at the end of the day the Open in MOOC was intended to be Free to Use, Free to Remix, Free to Repurpose, Free to Feed Forward. But for now, let's focus on the limited enrollment:
One of the most successful limited-enrollment MOOC/SPOC classes was CopyrightX from Harvard that only allowed 500 who made it through an application process to enroll. The course was still free, but students who took part were expected to be full participants (not auditors or dabblers), and the combination of limited enrollment and a decent-sized teaching staff meant that students could be given research and writing assignments that would be graded by teachers vs. peers.
Last summer I was having a chat with a respected friend and colleague, over beers, after the end of Campus Technology 2013. My colleague works for an entity that deals in MOOCs, and the organization does cap courses for one reason or another.  When I discovered this, I shot off the first volley and proclaimed that those courses weren't MOOCs if they prevented more than X number of people from signing up. An interesting discussion ensued whereby I was able to work out and better articulate (and understand) my own positions on MOOCs and course caps.  At the end of a very interesting discussion this is what I came up with: it's perfectly fine to have an enrollment cap in a MOOC if it's about one of two things. (1) You are unsure of the various technology pieces and thus you want to hold some variables constant while you stress test your system.  After all, you don't want a repeat of the Georgia Tech MOOC #Fail. And (2) the other acceptable, for me anyway, reason to cap the course is to experiment with some sort of new pedagogy, design, or technique and you want to make sure that you aren't juggling too many things; having fewer students is thus preferable for research purposes.

That said, even with lower course caps, this doesn't make it any less of a MOOC.  After all, as I have argued in previous posts, Massive is relative. Some courses will garner 100,000 students because the barriers to entry are lower, and others will only get 100 because the barrier to entry, such as previous knowledge that is discipline-specific, is pretty high.  Furthermore, the CopyrightX course isn't really a MOOC, in my book.  Not because of the course cap, but because of the way they approached the course.  They expected each and every student to participate based on their own rules, and they treated the course like a web version of a large, auditorium-delivered course. This came part and parcel with the assistants they had helping out in the course. This wasn't a MOOC. Perhaps it was more along the lines of a traditional online course, but calling it anything other than a free traditional online course is disingenuous and shows that there is no understanding of past precedents.  Next we have the whole sticky issue of licensing.
 The licensing of edX content to San Francisco State College that caused such a ruckus earlier this year represents the other phenomenon being commonly referred to under the name SPOC.  In that case, the same material you or I would see if we enrolled in a MOOC class (such as the lecture videos, quizzes and assignments associated with Michael Sandel’s HarvardX Justice course) would be given to professors who would be free to pick it apart and put it back together in order to customize their own classes in a way that represented their preferred combination of their own teaching resources and third-party materials.
I have two problems with the notion of licensing of MOOC content.  Both of my issues are philosophical. First, as I said above, we've established that MOOCs have an Open component for use, reuse, remix, and redistribution.  This also happens to be in the tenets of Open Educational Resources.  Sure, with OER you are technically providing materials under an open license, but the language used in the discussion over licensing of MOOC content is really much more commercial in nature.  It's seen as a way of making money for the venture-capital-funded MOOC LMS platforms like coursera and udacity.  In addition to the philosophical issues of what constitutes open, I have an issue with the crazy amounts of money pumped into VC-funded ventures, which might well raise tuition and fees for students who are paying to get their accredited degrees. So, in addition to signing contracts with these companies, giving them the right to redistribute the content, and handing over a considerable chunk of change to design or run these courses, we have content locked up in a closed system. This is a far cry from the Open we envisioned before EdX, Coursera and Udacity came onto the scene.

This reminds me of parallels in the academic journal publishing industry.  Authors do the work for free.  For the most part editors also do the work for free.  Journals, however, cost our libraries a pretty penny for access to the very journals that those same authors (and their students) contribute to.  If you are designing MOOC content with the intent of making a profit from it by reselling it to classroom flippers, then you're not making a MOOC. You're just developing content, like people have done in the past. If MOOC content is available freely for use in other courses - small, large, campus, online, flipped, blended or whatever - you don't need to call it by a new name.  Just use the OER like we've used it before.

Your thoughts on the matter?

#edcmooc: One man's dystopia...

Seems like Week 1 of #edcmooc is now done, and I've read (or in some cases reviewed) the readings and videos that they had posted as resources for Week 1. During the Week 1 live session recap and discussion there was an indication that there were 20,000 registrants for the MOOC.  I'd be interested in seeing how many of those 20,000 follow through and "complete" the MOOC, whatever "completion" means to the organizers of the course. For that matter, I'd like to know what "completion" means since, unlike other Coursera courses, there are no silly quizzes at the end of each week.

I understand that some people want some sort of formative assessment, but I tend to think that multiple choice quizzes are not adequate to indicate whether people "get it" or not.  I suspect that in this course there will be an "aha moment" around week 4 when it suddenly clicks for people.  If you are in #edcmooc, and are reading this blog post, my recommendation would be to go out and read others' blog posts and discussion forum posts, and then write your own blogs (or personal learning journals) to keep track of the thoughts in your mind :) Oh, and don't forget to comment on others' blogs and add your blog to the Edcmooc news feed.

One of the things that came to mind this week was that one person's dystopia is another person's utopia, or at least daily life which makes them neither happy nor unhappy. One of the short videos included in Week 1 was Inbox (see embed) which my colleague and I use when we co-teach a course on Multimedia in Instructional Design. My colleague and I include this video to have students brainstorm about the use of media and what the entire "package" conveys. It serves as a way to begin a discussion around a critical analysis of multimedia.



That said, this video got a lot of responses (as did the other videos for that matter), but there are two things that really stood out for me in the discussions:

  1. the mode of communication in this video is text
  2. there are no auditory utterances throughout this video

This led to some discussion on how we may have moved from aural/oral communication to written communication being the norm, and I sensed that this may have been said with some sort of lament. One of the fellow participants said that her students at school, once the class period was over, just jumped onto their electronic devices and started texting (or tweeting, or whatever), and this was seen as a negative, at least from what I read.  The silence of the corridors was deafening.  I wish I had kept persistent URLs for those discussions.

In any case, it seems to me that seeing communication as an aural/oral affair is pretty limiting.  There are many instances when communication is anything but aural/oral.  For many deaf people the main means of communication, even when two interlocutors are face to face, is sign language.  Sign language is also used in sports and combat, even among those who are not deaf. So while a lot of communication is aural/oral, it's not the only mode.  Furthermore, I would argue that communication these days can be predominantly textual. If you add up all of the hours that we experience oral communication, such as chatting with friends and family, water-cooler talk, listening to the radio, or watching television (if we consider the aural element), and compare that to reading books, letters from friends (aren't those nice when you get them in snail mail?), newspapers, reports, essays, blogs, webpages, and, yes, text messages and emails, I would say that we've been predominantly text-based for quite some time.

I don't want to pass judgement on this; for me it just is. Each individual and their individual situations will be affected differently based on a variety of elements.  For some, it's a bit of a dystopia because we are potentially losing that "human element." Then again, for others it might be that connection that wouldn't otherwise be there if the technology didn't enable it.  It's hard, for someone outside of the situation at hand, to really be able to gauge whether this is "bad" or "good."  When it comes to technology shaping our lives, at the end of the day, I tend to be on the technology equivalent of the Whorf-Sapir hypothesis: we shape the technology, it then shapes us, and we shape it again in a feedback loop. I guess others, including Winston Churchill and McLuhan, have said similar things.

There were quite a few interesting points in all of the articles for week 1, but I really don't want to rehash everything from those articles.  The one article that really stood out for me was the First Monday article on Diploma Mills from 1998. I wasn't really paying attention to the higher education scene at the time because I was busy graduating high school and entering college.  For me, at the time, college was nothing more than compulsory schooling where you chose your career path and were looking for a job at the end of that journey.  It would be an understatement to say that I see schooling differently now, but back then I wasn't concerned with this "technology" thing and the threat of dumbing down higher education.

The thing that surprised me is that, even though this article is now 15 years old, the rhetoric used, and the fears expressed, are the same as those used in the Future of Higher Education working papers that completely go after online education now that MOOCs are the new thing venture capitalists are looking toward for making some money. Consequently, MOOCs are conflated with online education in general by those who decided not to play in the online education arena these past 15 years.

Now, there are some comments made in the Noble (RIP) article that make sense, but seem to miss out on a number of points, even back then.
Experience to date demonstrates clearly that computer–based teaching, with its limitless demands upon instructor time and vastly expanded overhead requirements — equipment, upgrades, maintenance, and technical and administrative support staff
Yes, it's true that technology keeps improving and that this equipment needs maintenance, upgrades, and support. However, the same is true of other technologies, including cars. I don't know anyone who would argue that, because our roads stink and we need to maintain them (otherwise potholes form), we shouldn't use cars for day-to-day transportation.  I know it's a silly example, but a good parallel (in my mind) nonetheless.  That said, technology, and the demands on the faculty member's time, is actually a good thing in my book.  It's important for faculty members to develop succinct policies for communication with their learners to keep the work from creeping into personal life.  That said, the old ways of coming into a lecture hall and talking for a few hours, and then having some office hours per week for student contact time, are deader than a dodo (or at least should be).  Just as we don't learn from stale recorded lectures, we don't learn from an instructor who just isn't present.  ICTs have made an improvement in that arena as far as I am concerned (remember, one person's dystopia...)

Another interesting position that has come back to make the rounds through CFHE:
Once faculty put their course material online, moreover, the knowledge and course design skill embodied in that material is taken out of their possession, transferred to the machinery and placed in the hands of the administration.
I've discussed this with fellow faculty members in recent years.  Your syllabus, or the notes you create for your class, are not why students are coming to your course.  People are coming to your course to interact with you, and to learn under your direction.  They aren't coming for the material.  If the material were all they cared for, they could have easily gone to the public library (or college library), picked up the materials on their own, and self-studied.  Your material means very little without you.  Now, that said, I do think that you should be compensated for the time and effort you put into creating an online course; I know first-hand that it is a time-intensive endeavor.  As far as copyright on a specific implementation goes...well, a course can be implemented in many ways, and your way is just one way. At the end of the day, a collaborative approach to curriculum creation is the best way to go (at least according to me :)  )

As a side note for my fellow teachers: slapping a copyright notice on your materials and syllabus is tacky.  People will share the materials regardless, and won't credit you.  Just publish it under a Creative Commons attribution, non-commercial license and be done with it :). As academics I think we ought to be putting our material out there anyway and enriching the world. By publishing material under copyright you are making your material inaccessible and, as far as I am concerned, going against the main mission of academics: to push knowledge boundaries.
Most important, once the faculty converts its courses to courseware, their services are in the long run no longer required. They become redundant, and when they leave, their work remains behind.
I think that this really strikes at the heart of the argument: fear.  Because people fear that they will lose their jobs, and that they will be negotiating from a weaker position, they come up with all of these arguments against change.  Fear is a strong motivator (or demotivator), but at the end of the day they cannot take what's in your head. They cannot rob you of your knowledge (unless they invent a "forgetful ray gun" and shoot it at you), so you have an ace up your sleeve.

Finally, a quick commentary on my own #edcmooc participation in the forum --> I will just go in and up- or down-vote forum items that I read.  There is too much discussion for my voice to really stand out. I like reading what others write, but at the end of the day the coursera system isn't really set up to find people who you will follow through the duration of the MOOC, so I am limiting my "active" participation to blogs and twitter, in true cMOOC style ;-)

Critique of Making your own Quasi-MOOC

With three MOOCs done (only undertaking one now), I have a little more time to go through and read what has been piling up in my Pocket account.  Now, over the past couple of years there have been a number of articles on building your own MOOC, from a variety of people.  Some in publications like Learning Solutions Magazine, some in eBook form, some in blog form.

One of the blog-form posts comes by way of the blog "Managing eLearning" and the title is How to make your own MOOC. I was quite curious to see what the author had to write about the topic, but I was seriously disappointed when I read it.  My main issue with the article is that it subscribes to a very centralized xMOOC model, offered by an "elite" university.  I don't think that the "elites" have it right.  I applaud the exploratory spirit of some "elite" universities, but they get many things wrong. So, building on this xMOOC model seems just wrong to me. In this article there are 6 principles, or key ingredients, to build your own MOOC. The article is really basic, and the headlines could possibly apply to almost any online course.  That said, let me deconstruct the areas that I have an issue with:

The author's underlying assumption is that "massive" means "tens of thousands of users, ensuring that there is someone out there who is able and willing to answer almost any given question," and that this type of Massive is developed by "Brands. Educational brands. Big brands" like Harvard, MIT, Berkeley, and so on.  I completely disagree. We had MOOCs before these "elite" universities decided to jump in with their interpretation of a MOOC.  The universities, or entities, doing this were not big brands, but people still went to them and had a learning experience.  Massive, as I have written before, is not a static amount.  Massive can vary depending on the subject at hand.  An introductory-level Algebra course will be more "massive" than a graduate, master's-level advanced course in biomimicry. They can both be MOOCs, but the underlying requirements for the course will determine how many people actually sign up.  If a regular biomimicry course enrolls 8-15 students in a semester, then 150 students is actually massive for that course.

Next up, let's look at the author's "ingredients" for a MOOC.

Ingredient 0 --> An LMS: The author writes "If you don't already have one, you will need one. Social features, especially discussion forums, are a must." I honestly completely disagree.  An LMS is not a requirement for a MOOC, especially if we are considering discussion forums.  In a MOOC, the LMS discussion forum doesn't work well, let's face it.  We can work with the technology we have, but what it boils down to, and what we've seen thus far in the last 2 years of xMOOCs is that forums aren't well suited for this, at least in their current incarnation.  An LMS also does not address the design decisions of a distributed MOOC, where the LMS is a bit antithetical to that way of thinking about a course.

Ingredient 1--> Synchronous design: The author uses a slightly modified understanding of what synchronous is, so that's something to keep in mind. What the author suggests is that all learners need to move in lock-step. While building an online learning community is important, I disagree that MOOC learners need to keep in lock-step, moving through the same type of materials and activities.  I can see some people ahead of the curve, and some straggling.  The key ingredient is that community, not the synchronicity of the material.

Ingredient 2--> Short Learning Activities: Here we have a suggestion that we work on bite-sized learning activities, like Khan Academy.  While I agree that short videos are good (compared to hour-long, cognitively overtaxing alternatives), I disagree that Khan Academy-style videos are "it" for MOOCs.  We are back to a didactic sage-on-the-stage approach that isn't really helpful. Sure, some elements of this might work, but it's not something that you can generalize across the board for all MOOCs, across all disciplines and across all levels.

Ingredient 3--> Require Peer Review: I think peer review is great.  Peer review for a grade, however, not so much.  Peers, even more knowledgeable (MKO) peers, aren't the subject "expert".  I can learn a lot from peer-to-peer scaffolding; however, at the end of the day, my peers aren't necessarily qualified to grade me and have that be my final "grade" for the assignment.  Additionally, while I do think that peers can learn a lot from one another, forcing peer review in an open course seems antithetical to the open ethos of the course.  If I don't want to share my work, that should be fine.  If I want to share my work, that should be fine too.  Self-selecting, self-appointed peer review groups are preferable to forcing everyone into peer review if they want a certificate of participation.  A better way to deal with this, in my opinion, is what OLDS MOOC did with badges for peer review.

Ingredient 4--> Required Group Work: While this may be good in "regular" online courses, I think that on the MOOC front it's too contrived and too antithetical to the find-your-own-path ethos.  In smaller, "regular" online courses where you have fewer students, requiring group work is necessary because people may not self-organize in ways that encourage active community engagement.  In MOOCs, however, this is much less of a problem because you will always have people there who are able to kindle the flame of community sufficiently for others to jump in. Adding required group work, for a course that gives students no formal credit at the end, for me, adds a barrier to entry.

Ingredient 5--> Teaching Assistants: I've seen quite a few MOOCs run without TAs.  Remember, just because 100,000 people registered for your MOOC, it doesn't mean everyone is serious about participating.  You may just have a ton of people on your hands who are window-shoppers and never bothered to unregister when they decided that the MOOC was not for them.  TAs seem a bit of an overkill.  What would be better would be a core "team" of subject experts, each tackling a week of the upcoming topics, all contributing to the kindling along with other MKO peers.

Badge MOOC Challenge 6: Building a Successful Badge System

Trust Network Badge
Well, this is it!  We are in the final week of the #OpenBadgesMOOC, and this is the last post (for badge purposes anyway) from Mozilla's #OpenBadgesMOOC. As with previous blog posts in this series, I am brainstorming about including badges in an #ESLMOOC that I am thinking of designing, developing, implementing, and then studying for a potential PhD.  With this week's materials we are tackling the Badge System.  Since this brainstorming is all theoretical and planning, I will most likely have some assumptions that underlie this brainstorming session.  As with previous weeks, we have the prompt (from the MOOC site) followed by my brainstorming on the topic.

Prompt:
Challenge Assignment 6: Building a Successful Badge System
  • Verification
  • Authentication
In order for Open Badges to gain full acceptance, extra precautions must be in place to ensure transparency in and confidence about the badging process.  This involves authenticating that the badge holder is indeed the one who earned the badge, and that the badges displayed by a badge holder are verified as coming from an authorized source. These “official” steps can be technologically addressed in your badge system implementation. In addition, the open badge ecosystem is evolving to include reputation systems evaluating learning providers and assessors as well as endorsements offered by employers and standards organizations.

If you’ve gotten to this challenge, you’ve invested an enormous amount of thought and work in the prior five challenges. Don’t let it go to waste: Now is the time to actually implement a badge system. Draft a project plan, find collaborators, and see it through.
  1. What stakeholders at your institution or company need to review/approve the badge system? Do you have the right materials and explanations to help them make informed decisions?
  2. Can the badge system be used to address existing goals and thereby strengthen its purposes?
  3. Are peers or partners or consortial institutions implementing badge systems, perhaps providing collaboration opportunities? Do competitive pressures strengthen your reasons for implementing a badge system?
  4. Are your learners demanding more authentic and targeted learning opportunities? Are you delivering the value your learners expect?
  5. What are the next steps for your badge system? What other resources do you need? Build a roadmap/workplan for this badge system.

BRAINSTORMING for this week:
So, coming to the end here, this may actually be one of the easier challenges to respond to. The nice thing about building something from the ground up, and working on a MOOC of my own interests is that there is no institution to review or approve my badge system. This means that I can "bake in" badges into the course and that they won't be an afterthought.  I suppose that if I want some institutional resources I may have to seek some buy-in from the institution, but since they seem eager to experiment with badges it may not be such a hard sell.

Since the MOOC has not yet begun its development cycle (I am currently in an Analysis stage), this also lends itself to badges being tied into goals, meaningfully, as goals for #ESLMOOC get decided on, and as meaningful MOOC assessments are thought out for this MOOC. Thus, I am hoping that baking in a badge system will strengthen the MOOC outcomes.

In terms of peers, if this #ESLMOOC is dissertation material, I guess I have to do most of the original preparation alone, but it would be great to get input for the design and implementation of this MOOC from instructors who do ESL as their day job. Going forward, it would also be interesting to connect with others who are interested in examining the efficacy of badges for MOOCs, either at a regional or international level. There are no competitive pressures to implement a badge system, unless you think of research and publishing as a competitive pressure. I just tend to think of a badge system as a "good idea" that can help motivate learners and give them a tangible item that shows the fruits of their academic labor.

In terms of the learners for the #ESLMOOC, I'd have to go out on a limb and make some initial assumptions about them.  In the first blog post I described a couple of sample students. Now, since the MOOC is international, I foresee that the learners will have many different motivations for joining and many different expectations for completing the MOOC. They probably have expectations of themselves about how often they participate in the course.  As we've seen in many different posts on InsideHigherEd.com, the Chronicle.com as well as many blogs from various MOOC participants, we see that people join and continue (or discontinue) their participation in a MOOC for many reasons, and some don't have to do with the MOOC itself.  The interesting nut to crack will be a real proper tracking of learners in MOOCs, and getting feedback from people who want to continue, but the barrier to continuation is just a little too much to overcome.  If the barrier is course related, it will be interesting to see how a MOOC can be modified to help those learners.  Badges may end up being something that keeps people going.  I know that, for me, badges can be motivating to continue the course, if there is something else in the course that is of interest to me as a learner.

As far as delivering value goes, while a MOOC is free, learners do pay for it by putting in the time, effort, and brainpower to complete the MOOC, so they probably want to see something back from it.  Realistically speaking, in a 6-8 week MOOC someone's language isn't going to improve 100% in all areas, so the MOOC can provide value to the learners, and so can a badge system, but at the end of the day, the value that learners get out of the MOOC will in part be based on how much they put into it.  I know from personal experience MOOCing that what I get out of MOOCs greatly depends on how much I engage in them.  I expect a language MOOC to be even more so.

So, the next steps in this project are as follows:  I expect to do a whole lot of research into what the course ought to be (in other words, the Analysis phase of instructional design).  Concurrently, I plan on reviewing the research on MOOCs (the little that exists), but also educational research on discussion forums, twitter, wikis, blogs, social bookmarking, and so on to see what current research says about these things.  I am a few years behind on this at this point :-).   Once this is done, I will begin with assessments in mind, and that also means badges.  I will most likely be using Purdue's Open Passport platform since it ties into Backpack, and it's ridiculously easy to create badges on that system.  Considering I am not in a PhD program yet, I am not in any rush to complete this project tomorrow.  I am thinking about this as a long-term project (next 18-24 months) since I am working on it on my own, in my spare time. I think that as I am developing assessments and badges I will seek feedback from peers who are ESL instructors to see what they think.  I am wondering if anyone teaches ESL online - that would be of immense help in terms of sounding boards.

So, finally, let's end with a user story. Remember Stella D'Agostino? We met Stella in our first challenge blog post.  She is an Italian Professor at the University of Milan where the language of instruction is slowly changing from Italian to English.  She had completed her education in English, so she wanted to not be stuck in unnecessary classes.  She was signed up for #ESLMOOC to see what all the fuss was about.  She saw that badges would be something that would be available to learners who have demonstrated proficiency in the language.

As she looked at the modules for the six-week #ESLMOOC, she saw that she didn't really need to learn much from certain modules (2/3 of the MOOC, to be precise), so she submitted her assessments for those early so that the ESL instructors could assess her work.  She passed those with flying colors and got her badges of mastery for those weeks; however, her involvement did not end there. Since she had mastered those weeks early, she became a peer mentor (another badge she could earn) for those weeks, helping fellow classmates, some of whom were at the same university as she was.  For the weeks that she hadn't mastered before, she was able to participate fully, submit evaluation materials, and get mastery badges for those as well.  At the end, she not only had all the mastery badges for the MOOC, but she also had some additional (let's call them "rare") badges that showed her ability to be a peer mentor.  This helped her at work, in that she had additional opportunities to excel and help peers in an offline (non-MOOC) way.

OK, this is a bit down the road - but I think that these future (post-badge implementation) stories should be inspirational.  It's not just enough to get a badge - is it? :-)

Your thoughts?







Badge MOOC Challenge 5: Authentic Assessment and Evidence for a Badge Ecosystem

The real badge?
Alright!  The penultimate week in #OpenBadgesMOOC, brought to us by Mozilla and Coursesites.  Continuing this week is the exploration of how badges can be incorporated into this #ESLMOOC that I've been thinking about designing and implementing, and hopefully collecting some data from for some interesting analysis.  Dissertation-wise it seems like a good topic, but given that the university I was considering applying to has suspended operations due to austerity measures in Greece...well, I guess I'll keep looking at other programs while Greece sorts its issues out :-)

So, as with previous Badge Challenges, the Prompt comes before my brainstorming.

Prompt:
Challenge Assignment 5: Authentic Assessment and Evidence for a Badge Ecosystem
Badge system design acknowledges that not all learners are the same, not all learning situations are alike, and not all ways of ascertaining learning accomplishments and skills attainment are the same. Badges offer learners cum job seekers not just more flexibility in how they learn but also in how they prove that they have the skills/competencies represented by a badge.

Assessors have a multi-partite role in the badge ecosystem. They must devise strategies that, as much as possible, push assessment activities into the world where actual performance will occur. They must validate both performance and performer. They must provide a robust set of metadata for each assessment that communicates that validation in total.

In a healthy badge ecosystem, learners demonstrate their competencies in authentic learning environments, capture evidence of their achievements, and have valid assessment to back up the earned badge. Just as badges open the field for innovative learning providers, they simultaneously stimulate rethinking how learning opportunities are provided and assessed. Based on the badge system you described in the prior challenge, describe the learning and assessment frameworks that are needed.
  1. How can learners’ needs best be addressed?
  2. Will traditional learning contexts and methodologies suffice, or can/should they be reworked? Are entirely new methodologies needed? Will new/different staff be required?
  3. How do competencies map to the learning activities and assessments?
  4. What types of evidence and assessment are valued and/or required by employers?
  5. Do the methodologies support the validation frameworks the learning provider needs?
  6. Write one or more “after badges” user stories depicting the value of the badge ecosystem for learner personas. What opportunities do badges provide for your personas? What challenges must be overcome in order to optimize the value of badges? Who do they share badges with? What goals do they have for using their badges?

BRAINSTORMING for this week:
It was interesting to see Kyle Bowen at this week's live session in #OpenBadgesMOOC.  I have met him in person at Campus Technology conferences and I have seen some of what Purdue is doing, which is pretty nifty!  After one of his presentations I created an account on Open Passport to mess around with badge creation.  It was a pretty easy system for creating badges - if you know what badges you want to create, that is!  Just like with ePortfolios, the technology is not (generally) the issue, but rather the underlying objectives and learning that you are attempting to assess.

For this week I think that we're back in familiar waters, with fewer assumptions than in previous weeks, since some of the decisions in the last couple of weeks were above my "paygrade."  I do think that applying badge considerations to MOOCs is a little easier because, in theory, you aren't tied down by the institutional baggage of how assessment has been done for the last decade or century :-)

When thinking of learners and learners' needs, it's best to do some preliminary needs analysis; however, this is hard even for "traditional" higher education courses, let alone a MOOC where you potentially have learners from all over the world with many, and competing, interests.  That said, when designing the MOOC it's best to think about its target demographic and then check whether the people who sign up are the learners for whom the MOOC was designed. I do believe that in the weeks leading up to the MOOC, when learners sign up, they can fill out a survey that allows the MOOC facilitators to keep certain things in mind as the MOOC is facilitated, and as additional or supplemental material is discovered, evaluated, and rolled into the MOOC. This should help address the needs of the specific learners who are signed up for the MOOC.  The big thing to think about is whether the facilitators and designers of the MOOC keep addressing the needs of non-participating students, in other words, students who signed up, and might be reading along, but are not visibly participating in some way, shape, or form.
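Just to make that survey idea concrete, here is a minimal sketch in Python of the kind of sign-up data facilitators could tally before the MOOC starts. Everything here is hypothetical - the field names and responses are invented for illustration and don't come from any actual MOOC platform.

```python
from collections import Counter

# Hypothetical pre-MOOC sign-up survey responses; the field names and values
# are placeholders, not data from a real platform.
responses = [
    {"role": "university instructor", "self_rated_cefr": "B2",
     "goal": "teach graduate courses in English", "weekly_hours": 4},
    {"role": "university instructor", "self_rated_cefr": "C1",
     "goal": "improve written feedback to students", "weekly_hours": 2},
    {"role": "graduate student", "self_rated_cefr": "B1",
     "goal": "academic writing", "weekly_hours": 6},
]

# Quick tallies facilitators could glance at to decide what supplemental
# material is worth curating for the people who actually signed up.
cefr_levels = Counter(r["self_rated_cefr"] for r in responses)
goals = Counter(r["goal"] for r in responses)

print(cefr_levels)  # e.g. Counter({'B2': 1, 'C1': 1, 'B1': 1})
print(goals)
```

Nothing fancy, but even a rough tally like this would tell the facilitation team whether the enrolled crowd matches the target demographic or whether the supplemental material needs to shift.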

As far as staffing and methodologies are concerned, I would say that staff is definitely needed. A MOOC isn't a one-man show; rather, it's a team effort to design, develop, and implement.  My initial plan is to design and develop the MOOC on my own, simply because it's probably going to be part of a dissertation.  That said, when implementation time comes along, I would like to recruit some ESL instructors from my institution - maybe on a volunteer basis, or through some sort of grant support to pay them a small stipend - to help facilitate the course with me and to provide support for course material evaluation, learner outreach, and, at the end, some sort of evaluation of the trial group.

As far as methodologies go, as with most situations where instruction moves from one medium to another, there will be some change in methodology because there isn't a 1:1 correspondence between existing face-to-face language learning courses and the MOOC format.  I suppose you could shoe-horn the face-to-face approach into a MOOC, but it just won't be successful. The commonality between the MOOC approach to language teaching and existing face-to-face teaching is that I plan on using a Communicative Language Teaching (CLT) framework for designing, developing, and implementing this #ESLMOOC. While CLT can be the underlying method in both approaches, the tools used, and the specific methodological approaches, will vary between the two mediums. Thus, I foresee that in a MOOC environment we will be using a variety of Web 2.0 applications - blogs, Twitter, YouTube/Vimeo videos - that learners can use to process and engage with the material, but also to demonstrate their acquired or improved skills for assessment as the MOOC progresses. The methodology chosen to design and run this MOOC should be supportive of the frameworks for validating learning.

As far as competencies and their mappings to learning activities and assessments go, that's a subject to be determined - mostly because the competencies themselves need to be determined.  Right now, the CEFR has broad levels of competence.  In order to make those broad levels more meaningful, but also as a way to map badges to those broad competencies, supporting competencies need to be devised. At the very least there are three supporting competencies: listening, speaking, and writing. Those in turn would need some supporting competencies of their own.  All of those would have considerations for activities and assessments.  As far as assessments go, I don't plan on having many. There will be two levels of assessment for #ESLMOOC.  The first level would be basic assessment that is relatively easy to complete and doesn't need many assessors, or lots of assessor hours. This is probably where badges fit in.  The second level is a more in-depth assessment of learners, perhaps in a smaller cohort, that will serve as part of the data-gathering process for the PhD. If #ESLMOOC runs again, and if the second stage isn't as time-consuming as I am envisioning it now, then those will be "badge-ified" as well.
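To make the mapping idea a bit more tangible, here is a rough Python sketch of how a broad CEFR level could break down into supporting competencies, each tied to activities and to one of the two assessment levels described above. The badge names, activities, and groupings are all hypothetical placeholders, not a finished competency map.

```python
# Hypothetical competency map for #ESLMOOC; badge names, activities, and
# assessment levels are placeholders, not a finalized design.
competency_map = {
    "CEFR B2 (speaking)": {
        "comprehensible pronunciation": {
            "activities": ["record a 3-minute mini-lecture on YouTube/Vimeo"],
            "assessment": "level 1 (badge: Clear Speaker)",
        },
        "fielding student questions": {
            "activities": ["live Q&A practice in a small cohort"],
            "assessment": "level 2 (in-depth, research cohort)",
        },
    },
    "CEFR B2 (writing)": {
        "written feedback to students": {
            "activities": ["comment on sample student essays in a blog post"],
            "assessment": "level 1 (badge: Feedback Writer)",
        },
    },
}

# Walking the map gives a quick inventory of which badges would need to exist
# and which assessments require more assessor hours.
for level, competencies in competency_map.items():
    for competency, details in competencies.items():
        print(level, "->", competency, "->", details["assessment"])
```

Even a toy map like this makes the workload visible: every "level 2" entry is an assessor-hour commitment, while the "level 1" entries are the ones that can realistically be badge-ified first.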

When thinking about employers, and going back to the original use case of this #ESLMOOC - preparing instructors in higher education at non-North American institutions to teach their subjects in English - I would say that there are two broad elements of assessment that they need. There may be more, but those would be subject to some sort of needs analysis.  The two broad elements I can see at the moment are:

  • Clarity of speaking (speaking with less of an accent, being more comprehensible)
  • Clarity of written feedback to students

Finally, in terms of some "after badges" user stories, let's revisit some of the personas and their colleagues.

We have Professor Tomas.  He is teaching at the University of Milan. Since he will soon be required to teach his graduate courses in English, he decides to take part in #ESLMOOC to hone his skills in the English language.  Also, since he was willing to be a guinea pig for the researchers, he was assessed more than other learners in the MOOC.  At the end of the MOOC he "graduated" the course with a collection of competency-based badges, including one that gave him credentials as someone who could communicate orally at the C2 level (CEFR framework).  With this set of badges he is able to meet the requirements at his own institution, since his institution signed on to #ESLMOOC as a sponsor and was able to vet some of the outcomes of the MOOC.  Professor Tomas' only issue is that he also moonlights at other institutions, and since they weren't part of the conceptualization process of #ESLMOOC and are not familiar with badges, they are not sure what to make of his accomplishments.  Still, he has published his badges from #ESLMOOC in a variety of spaces, including his LinkedIn account, his Academia.edu account, and his professional website via an embed code that he got from his backpack.  For him, the overall goal for this particular set of badges is to certify his English skills.  Using digital badges he can take care of the immediate need of his home institution to demonstrate competency in English, but he can also advertise his skills via badges as a way to get noticed for part-time work at institutions abroad, including the US.
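For what it's worth, here is roughly what the metadata behind one of Tomas' competency badges might look like, sketched in Python and loosely modeled on the Open Badges assertion format. The URLs, identifiers, and badge name are all made up, and a real assertion issued through Open Passport and pushed to a backpack may differ in its details.

```python
import json

# Hypothetical assertion for Tomas' C2 oral-communication badge, loosely
# modeled on the Open Badges assertion format. All URLs and IDs are invented.
assertion = {
    "uid": "eslmooc-oral-c2-0042",
    "recipient": {"type": "email", "hashed": False,
                  "identity": "tomas@example.edu"},
    "badge": "https://eslmooc.example.org/badges/oral-communication-c2.json",
    "evidence": "https://eslmooc.example.org/evidence/tomas-mini-lecture",
    "issuedOn": "2013-11-15",
    "verify": {"type": "hosted",
               "url": "https://eslmooc.example.org/assertions/0042.json"},
}

print(json.dumps(assertion, indent=2))
```

The point of the sketch is the "evidence" and "verify" entries: they are what let a sponsor institution (or a skeptical employer that didn't help conceptualize the MOOC) click through and see what was actually assessed, rather than just taking the badge image at face value.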

That's all for badges for this week.  Thoughts?  One more week to go in #OpenBadgesMOOC
 Comments