Club Admiralty

v7.0 - moving along, a point increase at a time

Multilitteratus Incognitus

Traversing the path of the doctoral degree

HyFlex is not what we need (for Fall 2020)

HyFlex (Hybrid-Flexible) is a way of designing courses for (what I call) ultimate flexibility.  It takes both ends of the teaching spectrum, fully face-to-face and fully online-asynchronous, and bridges the gap between them.  Back in the day, I learned about this model of course design by taking an OLC workshop with Brian Beatty himself, but you can learn more about the model in his free ebook.  I liked the model at the time (and I still do) because it gave learners more options in the ways they wanted to participate in the course. They could come to class, they could participate online synchronously, or they could just be asynchronous, or a mix of any of those depending on the week.

Quite a few people on Twitter, including @karenraycosta, were pondering whether they dislike HyFlex (in general) or the implementations of HyFlex that we are seeing. Heck, it seems like HyFlex has become the white-label flex model for universities, because some of them are creating their own brands of flex! 🙄 I wonder what marketing geniuses came up with that.  Anyway, colleagues and I have been trying to flex our learning for the last few years as a trial, with mixed results.  The main issue that comes up a lot is a critical mass of students, with a secondary issue of staffing.  In a "pure" modality (fully F2F or fully asynchronous) you need a critical mass of learners to be able to engage in constructivist learning.  If lectures are your thing and you expect people to sit down, shut up, and listen, then it works just fine.  However, for the rest of us who want to build learner connections and interactivity in the classroom, we need a minimum number of students, and we need to have a sense of how many there will be so we can plan activities.  An activity for 20 people won't necessarily scale down to 2 people.  The same thing is true asynchronously: if most people are F2F, writing in the forums might feel like speaking to an empty room.

Things become more complicated if you want to create a synchronous session online and merge that with a F2F meeting.  The instructor becomes not only an instructor but a producer.  They need to manage the tech, ensure that everyone on-site has devices through which they can beam in the online folks (Zoom, Adobe Connect, etc.) to work in groups, and for team presentations they have to work wizardry to ensure that all people are well represented and the tech works. I've seen this type of producing happen in distance education classrooms of old, where people connected two physical classrooms via point-to-point connections: each site had a producer to manage the cameras that connected the students from one classroom to the other, and the remote classroom had a tutor. In total there were four people making this happen for a class of 40. HyFlex (the way it's implemented) expects one person to do all of this: the instructor.

While I think HyFlex is an interesting model to pursue, I think it's something to pursue for large class enrollments (think classes of 80 or more students), or multi-section team-taught courses (ENGL 101, for example, which might have multiple sections taught by many people).  HyFlex isn't good for a "regular"-sized class (regular defined as 12-20 students), because you need to design and plan for possibilities that might never occur.  This makes course creation more costly and course maintenance an issue, both of which fall upon one person: the instructor.  Considering that the majority of courses are taught by adjuncts these days - who aren't paid well - this also becomes an issue of academic labor.  Think about it (and use my university as an example):

  • One course is compensated as 10 hours of work per week (at around $5000, or $33/hour)
    • Assume 2 hours per week prep time (really bare minimum here, assuming all course design is complete and the instructor doesn't have to worry about that). That leaves 8 hours
    • 3 hours of that is "face time" each week.  That leaves 5 hours
    • 2 hours per week are office hours. That leaves 3 hours.
    • Assume 3 hours per week that you are spending engaging in things like forums, mentoring, reading learner journals, and responding back to them (an equal amount of time spent as on-campus). You are left with no paid hours to devote.
  • So what's left out?
    • What if you need to do more than 2 hours/week of student conferencing? Do you take a pay-cut? or do you say "first come first serve, sorry!" (not very student-friendly!)
    • Who grades and gives feedback for papers and exams?  Are they all automated?  That's not really good pedagogy
    • When does professional development take place to be able to use all the tech required for HyFlex?  Is this paid or not?
  • Parking on my campus costs $15 per day, so $225 per semester if you are only teaching one day per week. If you are unlucky and teach three days per week (MWF) or five days per week (MTuWThF), then your parking costs are $675 and $1125 respectively.
    • This makes your compensation per course:
      • $4775 ($31/hour) - one-day teaching schedule
      • $4325 ($28/hour) - three-day teaching schedule
      • $3875 ($25/hour) - five-day teaching schedule
    • While these costs are incurred by anyone teaching on-campus, regular on-campus instructors aren't working when they are off-campus; with HyFlex, however, they still have their online obligations.
  • There is a commuting cost associated with going to/from home. Those hours are not compensated or accounted for.
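The arithmetic above can be checked with a quick script. This is a minimal sketch using the figures from the post; the 15-week semester length is my assumption (it's what makes 10 hours/week come out to 150 paid hours, i.e. $5000 / 150 ≈ $33/hour):

```python
# Back-of-the-envelope adjunct pay math from the post.
# Assumption (mine): a 15-week semester, so 10 hrs/week = 150 paid hours.

COURSE_PAY = 5000        # $ per course, per semester
HOURS_PER_WEEK = 10      # compensated hours per week
WEEKS = 15               # assumed semester length
PARKING_PER_DAY = 15     # $ per day on campus

total_hours = HOURS_PER_WEEK * WEEKS  # 150 paid hours

for days_on_campus in (1, 3, 5):
    parking = PARKING_PER_DAY * days_on_campus * WEEKS
    net_pay = COURSE_PAY - parking
    # hourly rate truncated to whole dollars, as in the post
    print(f"{days_on_campus} day(s)/week: parking ${parking}, "
          f"net ${net_pay} (${net_pay // total_hours}/hour)")
# → 1 day(s)/week: parking $225, net $4775 ($31/hour)
# → 3 day(s)/week: parking $675, net $4325 ($28/hour)
# → 5 day(s)/week: parking $1125, net $3875 ($25/hour)
```

And this is before commuting time, which is neither compensated nor accounted for.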

I think HyFlex can work, but not for everything.  Furthermore, for fall 2020 it puts the lives of faculty in danger because faculty would have to come in to teach on-campus.  The "flex" option seems to only be available to learners.  When Brian Beatty originally proposed the HyFlex model (from what I remember of my OLC workshop), the flex was a two-way street.  The faculty member could also say "well, this week we're online because of obligations I have" - but the flex proposed by colleges and universities doesn't seem to include this two-way flex.

Anyway- that's what I have to say about HyFlex.   How about you? 

Educational assumptions discussed (Part II)

Well, here we are: part II of educational assumptions.  The last blog post was getting long, so here we are again! These are still some ideas about things I jotted down in the margins, highlighted, or otherwise reacted to when reading a recent research article in Open Praxis by fellow MOOC researchers France and Jenny. Despite my issues and concerns with the article, it's still worth a read so that we can discuss the things that came up in it.  In this blog post I am wrapping up the responses to some educational assumptions (or myths, depending on where you stand).

Courses are not experimental
One of the views that came across in the article was that Cormier, as convener of Rhizo14, was experimenting on us learners. This seems to rest on two mental images.  The first is that we, as learners and participants, were subjects in some sort of experiment, like the ones that IRBs warn you about (see Milgram, for example); the second is that courses, whether MOOCs or not, aren't by their nature experimental to begin with.  A course is never complete, in some ways.  Each time we run our course we iterate through different designs based on the feedback we've gotten from our learners.  This is both explicit feedback, through course evaluations and instruments like that, and implicit feedback, like observing many learners not do very well on a particular test item.  As designers and instructors, we always experiment in our courses to see what works best, or at least to see what might work best.  We experiment with the way we provide feedback, with the language we use in class, with the level of familiarity between us and our learners, with the course materials, with the presentation of those materials, with sequencing...and the list goes on!

At its core, I would argue that teaching is experimental.  This is how we advance knowledge of what it means to learn in a variety of contexts.  If teaching weren't experimental we would all, potentially, be doing the same thing in our courses. The reality is that we don't. To frame Rhizo14 as experimental, in this sense, not only connotes something nefarious going on, but also denies the reality that as teachers we strive to improve our practice so that we can in turn make changes that work for our learners and for the intended learning outcomes. How does one accomplish this?  Through experimentation - overt or covert.  In the course that I teach (a traditional online course), I've experimented with podcasting (and doing it a few different ways), different ways of engaging with learners in the course forums, framing assignments in different ways, and exploring badges as alternative credentials.  Do learners know that I am experimenting? Sometimes they do, and they can opt in (example: issuing badges last spring), and sometimes they don't, and they think that this is a normal part of class (example: doing podcasts).  We all experiment - the key is to abide by ethical teaching principles.

Learners are not complicit in educational acts
One of the things that didn't sit well with me about the article (and raised a few red flags on the research end of things) were the quotations from some informants to the survey.  I should say that it's not the quotation itself that bothers me, but how the overall narrative is framed in the article, which clashes with the experiences I had in the Facebook group of Rhizo14. One quote is as follows:
Some questioned the lack of content in the course and felt that it lacked depth and theoretical discussion. For these participants the rhizome is “A pernicious, pervasive weed, rooted in a lot of dirt and “SH***””; “ . . .a ‘thug’ and can be very badly behaved”; “Part of one big family/ plant—joined at the hip”; “Clones of the “same damn plant.” 
This level of vitriol is fine to express when someone is legitimately silenced.  However, I do not recall seeing any serious, prolonged discussion about this issue in the forums. So the two possibilities I can think of are that either this quote is misquoted or misunderstood, or the frustrated learner didn't bother engaging with the community.  This, to me, is a problem.  Learners are supposed to be complicit in their own learning.  There needs to be a willingness to learn and to experiment with a specific form of learning, and thus engage with the community.  If the course name is "the community is the curriculum," what do you expect?  If the reality didn't meet your expectations, it's up to you to engage and make it happen.  After all, why do we keep talking about empowering the learner if the learners won't stand up and take charge?

In any learning environment, be it a MOOC or traditional learning, learners ought to be part of their own learning.  They need to actively engage with peers and instructors to learn whatever is on the table to learn.  Otherwise, in my opinion, they don't have a leg to stand on when they complain vitriolically like this.

In a course where the community is the curriculum, we gotta stick to the point
This connects with the previous myth.  Personally, I didn't want to discuss Deleuze and Guattari. Some did, so more power to them!  Don't get me wrong, at some point I am interested in reading what they have to say, but Rhizomatic Learning didn't originate with them - so why bother at this point in time, when my time is short? The nice thing about MOOCs is that communities can form to discuss the smaller parts that interest them.  There is no need to "stick to the point" and explore only the areas that are pre-defined by the course.  In an Open Course, participants can go off on a tangent if they so wish.   In a traditional course, where an institution pays someone to make sure certain learning objectives are covered, then sure - OK.  In a MOOC, however, I do not think that this rigidity is necessary.

A MOOC needs learning objectives!
Perhaps I will be seen as heretical in my instructional design circles, but I disagree :)  cMOOCs don't necessarily need learning objectives, at least not really rigid ones.  In a traditional course, yes, we need them because that is the structure we are working under.  You have a curriculum that ties together, and you have people working in unison to deliver that curriculum and to scaffold people into new roles as members of a certain profession.  Learning objectives are necessary there, and they help the instructional design process in those cases.

In the early days of MOOCs (for me that was 2011) I was actually astonished that MOOCs offered by big names in the space didn't have learning objectives.  WTH?!  How can this be?  This is BS, I thought to myself!  Since those days I've pulled a 180 on this topic, and at the end of the day, for me, it depends a lot on whether control rests with the conveners (as with traditional courses) or with the learners.  If control rests with the conveners, and you only get credit for what they tell you, then learning objectives matter.  If control rests with the learners, and the course is much more flexible as to what "completeness" means, then learning objectives set by the convener are almost meaningless.  The learners need to set their own learning objectives and pursue them.

Alright - so I think that takes care of all of the comments, thoughts, and reactions I had with this article.  I guess now it is time to make a dent on my Pocket reading :)

You've been punk'd! However, that was an educational experience

It's now the end of Teaching Goes Massive: New Skills Required (aka #massiveteaching) on Coursera. Well, almost; we still have a couple of days left. I guess the lesson here is that we (the "learners") were punk'd† by Paul-Olivier Dehaye of the University of Zurich.

After that last blog post (and the subsequent pickup of the post by George Siemens and others), Inside Higher Ed and the Chronicle wanted to chat with me about this MOOC experience. I only had time for one of the two before things went to press, so my 15 minutes of fame went to IHE.  Both IHE and the Chronicle have written about the topic, and have received some information from Coursera on the incident.  Others have also written about it (see here, here and here).

It's surprising to me that the University, and Coursera, waited until after this thing became a big issue to respond in the class and clue people in to what was happening. On Wednesday (day 3 of week 3 of 3 of the course) the University posted the following:
Dear Coursera Students,
Prof. Dehaye, instructor of the Coursera course “Massiv Teaching - New Skills needed”, has deleted content during the course as part of his pedagogical concept in order to get more students actively engage in the course forum. In the course of the events confused students contacted Coursera directly, as they assumed a technical problem being the reason for the disappearing of course material.
Unfortunately, Prof. Dehaye had not previously informed Coursera of this part of his pedagocial approach: Deleting course material is not compatible with Coursera’s course concept, where students all over the globe decide when they want to watch a particular course video. Prof. Dehaye’s course included experimental teaching aspects which led to further confusion among students.
Coursera and the University of Zurich decided on Friday, July 3rd, to reinstall the course’s full content and paused editing privileges of the instructor until final clarification on the issue would be obtained.
The course is now back on track, and will conclude as planned, with the final assessment that is due this week. Once again, our apologies for the confusion and thank you for your patience.
- The University of Zurich Team

This response is quite interesting.  First, it shows that Coursera has a "concept" of what MOOC teaching and learning look like, and they are packaging it with their LMS. I guess the only sanctioned way to design and teach on Coursera is the Coursera way. This, to me, is quite problematic from a pedagogical stance, and I am sure others have written about this before and will continue to write about it. It's also problematic because it shows that neither the University nor the provider (Coursera) has any real way to address potential issues that crop up in these courses, especially unexpected issues like the professor going rogue and deleting everything, or the professor just leaving the course and not telling anyone about it.

Now, we all knew that there was an experimental element to the class, Paul didn't hide it, but we weren't clued into the experiment. The experiment happened to us instead of having the learner play a controlling role in the experiment. If you know that you are supposed to set your own path, from the onset of the experiment, then you set your own path. If you think (given the Coursera model) that things function a certain way, when things don't, then you assume that they are broken and someone in a position of authority (i.e. someone with editing rights on the platform) needs to fix it so that we can all move on.

There have been a number of comments going around on the research ethics of this whole situation. Personally, I feel conflicted about this. I honestly don't think there was an ethical violation here on Paul's part. That doesn't stop me from feeling like this is a major facepalm moment, but I don't think he violated any ethics.  We all signed up willingly and voluntarily (I hope!) for the Coursera platform, and then we again signed up for this "course." What makes me conflicted is the inevitable feeling of bait-and-switch that many people experienced. They were sold one thing, and they experienced something totally different.  It doesn't matter that this was a free course (a comment made in the Chronicle article); the course was not free.  People spent their time and effort in an earnest attempt to engage with their fellow learners, and what they got was "technical errors" (which were planned), confusion, and manipulation.

The result of this was really pedagogically ineffective. Paul's attempt really didn't address any of the stated learning objectives (saved on Scribd for those who aren't in the course), and it created a lot of ill will toward him and the medium. There are better ways to help "disorient" educators to help them find their footing in the online landscape (the part in italics is one of the course "goals" - I guess it was written in invisible ink...). Anyone who has participated in a cMOOC can tell you that there are better ways. Each cMOOC is unique, and initially provides a disorienting experience until you get your footing, especially when you are new to MOOCs.  This is accomplished without pulling the rug out from under you, and by helping to create learner communities (learner-initiated communities) without the drama involved in #massiveteaching. This language of finding one's footing in the online world also seems to imply that people don't yet have a footing in the online landscape, which in my opinion is a gross misunderstanding of the situation. What will probably happen is that participants in #massiveteaching will carry this experience forward and associate such shenanigans with Paul, and possibly with online learning in general. Paul's tone-deafness to what the learners are saying, and the fact that he doesn't seem to understand the existing literacies of learners (computer, critical, network, and so on), mean that Paul has done a disservice to online learning everywhere.

In the IHE post Paul Dehaye is quoted as saying:
MOOCs can be used to enhance privacy, or really destroy it, I want to fight scientifically for the idea, yet teach, and I have signed contracts, which no one asks me about.... I am in a bind. Who do I tell about my project? My students? But this idea of the #FacebookExperiment is in itself dangerous, very dangerous. People react to it and express more emotions, which can be further mined.
OK, so I guess he was trying to make a statement about the dangers of the MOOC providers, and as Siemens points out, they lack any governance structure and are leeching on the teaching functions of academic institutions. I'd love to say to Paul, "Welcome to 2012."  Again, anyone involved with MOOCs who is critical of the discourse around xMOOCs has known this for a while. Paul just showcased his ignorance, and further pointed us to the fact that this stunt was a bad way to go about raising awareness of the issues around xMOOCs. He lost more people than he gained. People will remember this stunt more than the stunt's message. It's a shame, because it is something we ought to be discussing.  Back in 2011 and 2012 I was really excited about MOOC research. I still am, but we only really see published research from cMOOCs.  Courses running on Coursera, edX, and Udacity are like black holes. We rarely see any research coming out, and what does come out seems quite sanitized, like a press release.  What happens to the massive amounts of data collected from student actions in a MOOC? Who controls this data? Who has access to it? Can we use it for pedagogically meaningful reasons?

Finally, in the course, I guess some people drank the Kool-Aid, because this showed up in the discussion forums of the course:
In retrospect I think this MOOC was a brilliant example of Collaborative Problem Solving!! The challenge for me now is to think of problems that would engage my students as meaningfully as Paul managed to engage us. Real learning happened (whether we liked it or not), real emotions were felt and a huge bunch of strangers came together, offered our skills and helped the group work through the problem of learning online. Well done Paul and Coursera on opening my eyes to a whole new world of possibilities for learning!

Which was followed by a reply:
yes __NAME__, we already addressed the situation with CPS, if you looked at the FB site and the Google group of this course we took this course to a different level.. I am sure now all the members who took those have lost their motivation, This was infact my assignment asnwer in the Assesentment and teaching in 21st Centuray.. 
Part of me is wondering if this is falling for it hook, line, and sinker; or making lemonade out of lemons; or a coping mechanism on the part of some learners, a sense of getting something out of a giant disappointment. I think that there is something fundamentally wrong about claiming that you got something out of the course by going to an off-course resource to get it.  Now don't get me wrong, I've been in a lot of learning situations, in MOOCs, where a lot of learning occurred on social media sites like Facebook (Rhizo14 is a good example). However, it was the learners' choice to go there (as it was the learners' choice to go to the G+ group for Rhizo14 in lieu of, or in addition to, Facebook).  It wasn't a forced path like it was in this case.   The learning communities of cMOOCs are created by learners, and the existence of such communities is aggregated through the network so that others may join if they wish. Learning on social media is a grassroots behavior in cMOOCs.  In this case it seemed like a pre-determined path that Paul had set out, but only if the lab rats (learners) went in the "appropriate" direction. While ethically it might be questionable, I find it pedagogically dishonest.

To wrap things up, the final "assignment" has been released, and it seems a bit disingenuous to me, given everything that transpired.  It was a peer-graded essay with the following three questions:
  1. Please state briefly what you knew or thought about MOOCs when entering the course. It might help to give some limited information about yourself (are you a student? an instructor? in what context?). This question is asked so your peers can assess your progress in the course.
  2. Please state briefly what you hoped to learn in this course. Again, this question is asked so your peers can assess your progress in the course.
  3. Did the student provide information about what they expected to learn in this course?

With regard to this "assignment" I feel rather cynical on all fronts. On the one hand, it feels like this is just another data-gathering stunt: Paul ran his "experiment" and now he is collecting data to see what the learners say - the learners that didn't un-enroll from the course, that is.  On the other hand, even if this is an earnest attempt to have learners introspect on this whole process, the attempt falls flat on its face because the data is tainted.  The questions don't address anything that happened in the course. It feels like they were written with the original learning objectives in mind, and as such they reduce this final exercise to a farce. It is a farce that does not respect the learners, and it is a farce of the educational process.

Who is to blame for this farce? I don't think that the blame lies solely with one entity (and calls for Paul's head are a bit over the top, in my opinion). Coursera, the University of Zurich, and the professor are equally to blame, as are the circumstances (i.e. the lack of formal procedures). This was ill-conceived: there was apparently no appropriate or substantive communication between these three parties, no apparent oversight by some sort of curriculum committee at the University of Zurich, and little-to-no oversight from Coursera (though they would love to charge you a premium for verified certificates). At the end of the day, Paul, in my book, is at best a misguided academic, and at worst a troll. I think we have a long way ahead of us to make sure that such abuses of learner trust are not repeated.

    † Punk'd reference explanation for those who aren't aware of this show: wikipedia link.

    Blendkit, I am flipping the tables on you!

    BlendKit, prepare to have your mind blown!

    OK, I am exaggerating a bit, but I am going to come at this MOOC from a non-traditional approach. I've been thinking about the DIY activities, and I have to say that the DIY tools (4th column, DIY project deliverables) are pretty nifty; not just for blended learning, but also for instructional design purposes in general. Now, I don't think I will have a ton of time to complete all the DIY deliverables; and, considering that the course I am working with is only a proposal which may or may not become a full course†, I think it's best not to spend a whole lot of time on a specific course until it's approved to run (given other competing time issues).

    In any case, my guinea pig course is a course on Mobile Learning that I developed (at least in syllabus form) in the past year. The funny thing is that I originally conceived of this course as a blended course; however, due to business factors (more online students interested in taking it compared to local students), I decided to address the course to online learners.  That said, I think it would be interesting to take an online course and turn it into a blended course: thus not asking which on-campus activities can be undertaken online, but rather which online activities can be done in person, and how in-person interactions can improve the online course.

    Looking at the mix map, I would say that the final presentations for the course (an existing item in the syllabus), as well as intermediate check-in presentations for the site surveys and the technology white papers (not on the syllabus), would be good face-to-face elements.  The technology white papers can have an element of show-and-tell, whereby the students who wrote a white paper on a specific technology that can be used for mLearning can demonstrate it in class and have a learning activity planned around it.  For the site surveys, students can present what they have found thus far for their sites that could benefit from some form of mLearning intervention, and get feedback from their peers so that they can uncover things in subsequent surveys that they might have missed or not thought of.

    I am trying to avoid the "course and a half" syndrome that the Sloan-C folks warned us about, so if it seems too much for one course, feel free to chime in!

    Finally, one more thing that would (I think) benefit from face-to-face interaction is doing the site survey as a team of two or a small group of three people. This would be a course activity modification, since the site surveys are individual activities in the current syllabus.  That said, I think that if small teams undertook the site surveys, they would potentially uncover more interesting information about their sites, the learners, and the information to be learned than if they were working on their own.

    One last word: even though I won't use the course blueprint for this activity, I think the course blueprint is a fantastic tool because it maps course description, to course objectives, to activities in the course. My syllabus already contains all that info, and it's a standard requirement to have these items in the syllabus from our department of Graduate Studies.

    So, the question is this: what will you do for your blended course?

    † while writing this, the "I'm Just a Bill" Schoolhouse Rock song came to mind ;-)