Club Admiralty

v7.0 - moving along, a point increase at a time

Multilitteratus Incognitus

Traversing the path of the doctoral degree

Analytics, and usage in Higher Education

 Permalink
It's week 4 of #cfhe12 so it must be time for Big Data and Analytics as the topic of discussion. It's interesting coming back to this topic because it was the topic of the first MOOC I took part in, LAK11, and it's a topic I've been thinking about (or at least keeping on the back burner) since I was in business school. One of the things to keep in mind when talking about analytics is that there are quite a few definitions out there, so, when talking about learning analytics, it is important to define what we aim to get out of our discussion about analytics and how we wish to employ the potential insight that we get from this data.

There are two topics that have recently come up in my neck of the woods: knowing what sort of data one can get from the various campus systems, and knowing what that data means (and accurately representing what it tells us). First, it's important to know what sort of data you can get out of your systems, like the LMS. As I've written elsewhere, systems are designed with certain design parameters and certain underlying assumptions in mind. This, of course, affects pedagogy, but it also affects what sort of data the system keeps track of. If the system doesn't currently keep track of certain data you need, don't dwell on it. Put in a request to your system vendor and see what happens; don't say "We don't have this data? Well, that's stupid! Why not?" The "why not" does not matter; what matters is how to move on from here. The other thing to keep in mind is not to make assumptions about what systems track and how they do it. This can get you, and your organization, in a pickle. You should ask your vendors what they track and what they don't, so you can plan accordingly.

The second thing that needs really careful consideration is what the data actually means! Over the past 10 years I've worked in a variety of departments on campus and one thing seems clear: the data collected is often poorly analyzed and understood; or departments shed the light they want to shed on the data they've collected in order to make their department the "hero" of this year's annual report, or to get as many resources as they can for their department. This second part is a direct result (I think) of the siloed nature of academia.

With more than 4 years of business intelligence and analytics in my head, I am not sure what to add. What do you all think? What would your elevator pitch be for learning analytics?

Figure from: http://www.educause.edu/ero/article/penetrating-fog-analytics-learning-and-education

 Comments

Coursera mLearning fail

 Permalink
The other day I saw that there were a couple of new videos available in the HCI course. Since I didn't have time to watch them during lunch, and since (as established) coursera has no offline viewing for their courses, I decided to try my luck with the iPhone while commuting.

Since I do use coursera, and I do watch videos on my iPad when I am at home from time to time (on wifi), it would make sense that I would be able to do the same on my iPhone. Thus, with 20 minutes left in my commute, and two 17-minute videos to watch, it seemed like a no-brainer. Well, the image I got was the one on the right; in plain English: video not playable.

What gives? This can't possibly be a technology constraint, so it must be a course design and delivery constraint. It reminds me of the continuing discussion (well, a series of posts in actuality) thinking about the constraints that LMS/CMS design places on teaching and learning, based on the assumptions that go into designing an LMS. It seems to me that coursera's platform designers envisioned learners with butts in seats, in front of computers, as if they were in some sort of virtual lecture. The design consideration doesn't seem to be inclusive of other ways of consuming content; and yes, learners consume content ;-) we need to get rid of the negative association that surrounds consumption (more on this in a later post). Learners don't just consume learning content in front of a computer, in the office, or on the laptop while sitting on the sofa; they consume content, even learning content, while commuting or while working on something around the house, like gardening. Course content design and delivery needs to evolve in order to keep up with this.


 Comments

xMOOC: of participation and offline apps

 Permalink
**sigh**
The mobile client ate my post! I will try to reconstitute as much of it as I remember ;-)

In this blog post I am continuing the train of thought started by thinking about different levels of participation, and by my blog post on MOOC registration. Since MOOCs are generally not taken for credit, and since they generally don't need to conform to some sort of departmental outcomes standard (i.e. this course addresses Program Level Outcomes A, D, and E), it would be easier for a MOOC than for a traditional course to design several tracks and have different requirements for those tracks. There might also be options for a create-your-own-track, depending on the course, of course.

When a participant registers for a MOOC they can pick their track(s) and the system can monitor the participant's progress. I think of this like Nike+'s goal setting. For example, my goal was to do 72 miles in 2 (or 3) months. Sure, for a hard-core runner that's probably nothing. For a desk-bound employee who only walks and runs during lunch (just for the sake of walking and running and clearing one's head), that is a lofty goal. The little progress bar on my Nike+ account tells me how far I've gone, how much I have left, and how long until my time runs out; that's motivating!

Like Nike+, too, participants can elect to post their progress on various social sites, like facebook, to get cheers and attaboys from their friends and family. Of course, this can be part of the course as well in some sort of leaderboard where people can get "likes" from their peers when they get something done. This doesn't really do much for me personally, but it probably does for others, which is probably why it's still a feature in Nike+.

I think the combination of picking your goals, at the beginning of the MOOC (although you are always free to change your goals), and being given some feedback as to how you are doing based on the criteria for those goals, would be helpful in the long run.  Sure I am going to get notes from a variety of self-directed learners. If you are self-directed, please ignore my blog post and don't post a comment about how I am stifling your creativity ;-)  You obviously have motivation, and study skills, to spare. My proposals are geared toward motivating those who are not as self-directed as you are :-)

As a side note, due to hectic work schedules, I have not been able to view some videos from my 2 coursera MOOCs. At home I generally don't MOOC a lot, so when do I prep for MOOCs? During my commute, where I don't have access to (reliable) wireless networks on my iPad. Why is there no coursera app for tablets that allows you to download new lectures as they become available, submit assignments, peer review assignments, and take quizzes, and then, once you get connected again, sync your viewed items, your quizzes and your assignments? Seems like such a no-brainer. You could also get push notifications when new quizzes, lectures and peer feedback are available.
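The offline behavior I'm imagining could be as simple as an action queue that flushes when connectivity returns. Here is a minimal sketch under my own assumptions (the class, action names, and payloads are all mine; this is not anything Coursera actually ships):

```python
class OfflineQueue:
    """Queue 'viewed' events, quiz submissions, and peer reviews while offline;
    flush them to the server once a connection is available again."""

    def __init__(self):
        self.pending = []

    def record(self, action, payload):
        # Store the action locally; nothing is sent yet.
        self.pending.append((action, payload))

    def sync(self, send):
        """Try to send each pending action; keep only the ones that fail."""
        still_pending = []
        for action, payload in self.pending:
            if not send(action, payload):
                still_pending.append((action, payload))
        self.pending = still_pending

q = OfflineQueue()
q.record("mark_viewed", {"lecture": "week-3-video-1"})
q.record("submit_quiz", {"quiz": "week-3", "answers": [1, 3, 2]})
q.sync(send=lambda action, payload: True)  # pretend we're back online
print(len(q.pending))  # everything synced, queue is empty
```

The design choice worth noting is that failed sends stay queued, so a flaky commuter connection just means the next sync picks up where the last one left off.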
 Comments

What is participation? How the LMS determines what you do

 Permalink
It seems like Rebecca and I were on the same wavelength yesterday when we were composing our blog posts and reflecting on various aspects of MOOCs.  Rebecca wonders why there is only one level of participation in xMOOCs, and I have to say, having started my 3rd coursera MOOC yesterday (same one as Rebecca, the Design: Creation of Artifacts in Society on coursera), I can see that (from my limited experience) there is a limit on how participation is counted.  Granted, I've spoken out about participation in the past for cMOOCs, but I've considered participation as being active somehow (twitter, blogs, discussions, etc.).  In xMOOCs, and in particular my two experiences on Coursera for the Gamification course and  now the Design: Creation of Artifacts course, a participant gets a certificate of completion having done all the quizzes satisfactorily and by completing the assignments.

This is one level of participation, and it's one of the valid ways to get participation out of the course.  I have to say that the Gamification course hit the right spot: I was interested, and I had some free time to devote to it to complete the assignments.  I was also gathering some research data for an upcoming MOOC paper that I am thinking about writing, so that too was a motivating factor.  The design course has an equally engaging faculty member (in my mind anyway) and the assignments aren't bad; but I think I am in a bit of a time crunch, and honestly the assignments don't seem to resonate enough with me (i.e. I feel a bit bored).  I could mechanically finish them so I could get a certificate out of them, but why bother? I may tackle an assignment this weekend just to see if I am motivated, but don't hold your breath.

This brings us back to Rebecca's point, and to student motivation. If the lecture is interesting, and the professor is interesting, but the assessments are not, how does one, in MOOCs and in "established" course formats, deal with the issue of student motivation and working with the student to meet the course objectives, but still demonstrate mastery of the subject in a way that makes sense for those students?

Let me draw attention to another coursera course, the Human Computer Interaction Course that I am also following currently.  This course has 3 levels of participation, not just one!  Here is what the HCI course offers:

Apprentice track
Weekly quizzes (100%). Students who achieve a reasonable fraction of this (~80%) will receive a statement of accomplishment from us, certifying that you successfully completed the apprentice track.

Studio Track
Weekly assignments (culminating in design project) (worth 67%) and quizzes (worth 33%). Students who achieve a reasonable fraction of this (~80%) will receive a statement of accomplishment from us, certifying that you successfully completed the studio track.

Studio Practicum
ONLY available to students who have received an Apprentice/Studio Statement of Accomplishment from a previous offering. Weekly assignments (culminating in design project) (worth 100%). This practicum is designed for students seeking to continue developing their design skills through an additional iteration of assignments. Students who achieve a reasonable fraction of this (~80%) will receive a statement of accomplishment from us, certifying that you successfully completed the studio practicum.
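The weighted grading rule described in the track listing above can be sketched in a few lines. The weights and the ~80% cutoff come from the course description; the function names and sample scores are my own illustration:

```python
def track_score(quiz_pct, assignment_pct, quiz_weight, assignment_weight):
    """Weighted overall score for a track; all values are in the 0-100 range."""
    return quiz_pct * quiz_weight + assignment_pct * assignment_weight

def earns_statement(score, threshold=80.0):
    """The course awards a statement of accomplishment at a 'reasonable fraction' (~80%)."""
    return score >= threshold

# Apprentice track: quizzes are 100% of the grade.
apprentice = track_score(quiz_pct=85, assignment_pct=0, quiz_weight=1.0, assignment_weight=0.0)

# Studio track: assignments worth 67%, quizzes worth 33%.
studio = track_score(quiz_pct=90, assignment_pct=75, quiz_weight=0.33, assignment_weight=0.67)

print(apprentice, earns_statement(apprentice))  # 85.0 True
print(studio, earns_statement(studio))          # 79.95 False: just misses the cutoff
```

Notice how the same raw quiz performance can earn a statement on one track and miss it on another, which is exactly what makes multiple tracks more flexible than a single pass/fail bar.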

Now, OK, it's not hugely imaginative, but it's better than just one track! One of the problems that Coursera xMOOCs have is that they all (seem to) follow a standardized design, which might work for some courses, but not for others! The design seems to be as follows:

View video --> take quiz (assessment) --> work on assignments (assessment) --> peer review assignments (assessment). Discussion forum activity, or other forms of assessment or activity, have not been thought about, and they haven't been implemented. I suppose this makes sense, since Coursera and udacity were created by, and with the help of, people who teach technical or scientific fields where the mode of operation is: lecture, work on a paper or assignment, robograde (in computer science your program either works or it doesn't) or grade the paper, more lecture. This mode works (well, or not so well) in fields like computer science, but not in the humanities. The same mode of teaching does not apply, so what do you do when your platform wasn't built with this in mind? This reminds me of Lane's paper on how LMSs impact teaching. The underlying platform was built with certain constraints in mind, and in turn those constraints get imposed on other courses. This isn't good from a course design, or course teaching, point of view! Perhaps it's time for a better or different platform?

 Comments (2)

Open Assessment and Blended Learning

 Permalink
The topic of open assessment came up during #blendkit2012 this week, which is quite a fascinating topic. Britt asked if peer review can work in small groups, having seen it in xMOOCs like coursera.

I've written about open assessment before, but not specifically about this, I don't think. I have written some quick thoughts on the coursera peer review system, which can be summarized even more quickly by saying "hit or miss." In the one course (thus far) where I've opted to do the assessments and review my peers, the reviews were a mix. Some reviews of my work were good, others were lacking, and for some I wondered if they even read (or understood) the rubric! So, while I can see how massive open peer review can be good, the fact that it's anonymous means that I can't seek clarification, and there is no apprenticeship into the rubric to make sure everyone gets it (and really understands the asynchronous lectures).

Bringing this back into the blended classroom, I think that peer assessments can, and do, work. When I was a student I had some courses where peer assessment was part of the course. The key to making peer assessment work (from my experience anyway):
  • Everyone must be current on the readings and make sure they understand them
  • Everyone must understand the rubric and the proper application of the rubric
  • Being anonymous is a good thing at times; it allows students to be honest. But there needs to be a junction box to feed questions about the feedback back, so issues can be clarified
  • Finally, there needs to be instructor final approval of the peer grading and assessment. It's not sufficient to have students peer assess because, after all, they are novices. They would not be in the course if they were not novices. The instructor thus has the obligation to be the final arbiter of the grade, to fill in assessment feedback that is lacking, and to filter out irrelevant or destructive feedback.
Thus, I can see peer assessment really working in a blended classroom, if implemented right, and if the learners are prepared to undertake this task.

 Comments (2)

Mass is relative, and the need for numbers that make sense

 Permalink
This week on #cfhe12 I read a couple of posts of interest from my fellow participants (apologies, I am currently on the train with no connectivity, or else I would search for those posts and link to them :-) ) and there were two key points that I wanted to reiterate, combine, and expand upon. The first point is that mass (well, "massiveness") is relative. I am sure I learned in physics that mass is indeed relative there, too, but I'd have to take a MOOC to brush up on my high school physics ;-) In any case, 100,000 MOOC participants in course X does not mean that it is equivalent to 100,000 participants in course Y.

If you have a course (MOOC) that is an introductory level course (introduction to German, for example), you will most certainly sign up (and probably retain) a whole lot more people than a more niche course (let's say "Seminal Works of Bertolt Brecht," which is taught, discussed, and written about in German). The introductory course will appeal to novices, and to people like me who want to brush up. It will appeal to people who just want the language component for travel, news, literature, or communication with those long lost, and recently found, relatives. In other words, greater appeal. The Brecht course, on the other hand, will probably only appeal to people who are interested in Brecht and his works and have the communicative competence to work with German as the primary means of communication (i.e. fewer people than the intro course).

I use another language here as an example very deliberately. More niche courses, especially those in specific disciplines, assume an enculturation into the discipline, an apprenticeship if you will, that intro courses do not. Niche courses assume a scaffolding of the students as a pre-requisite to joining the course rather than having more basic pre-requisites. This apprenticeship into the discipline is essentially the same as speaking another language. Now, whether or not it should be that way is another question and we won't tackle that right now.

This brings me back to massive being relative, and thus we need better metrics, better analytics, and a better understanding of what those numbers mean. Another MOOC participant wrote about improving the account creation page for gRSShopper. This reminded me of a proposal that I had written about last year as part of #change11: a way to track who is viewing the newsletters (we know they are getting mailed out), who is clicking on the links in the newsletters, and correlating that with twitter, diigo, blog and LMS activity to figure out who is participating in some way and who is not. Those who are not participating can be prompted every so often by an "early warning system," like Blackboard's early warning system that alerts instructors if students have not done X by a certain time, to see if things are going well, if the learners need assistance, and, if they plan on not participating, why not, and whether they should be offered a mechanism to unsubscribe (which will record why they left the course). At the conclusion of the course, learners should complete a course survey that gathers some feedback from the learners. For 13+ week courses, surveys should be done every 4-5 weeks.
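The core of that early-warning idea is just "flag anyone with no tracked activity inside a window." A minimal sketch, assuming a simple activity log per participant (the field names, roster, and seven-day window are my inventions, not anything gRSShopper or Blackboard implements):

```python
from datetime import datetime, timedelta

def flag_inactive(participants, now, window=timedelta(days=7)):
    """Return names of participants whose most recent tracked activity
    (newsletter click, blog post, LMS login) falls outside the window,
    so a prompt or check-in email can be sent to them."""
    flagged = []
    for p in participants:
        last_seen = max(p["last_newsletter_click"], p["last_blog_post"], p["last_lms_login"])
        if now - last_seen > window:
            flagged.append(p["name"])
    return flagged

roster = [
    {"name": "alice", "last_newsletter_click": datetime(2012, 10, 20),
     "last_blog_post": datetime(2012, 10, 1), "last_lms_login": datetime(2012, 10, 5)},
    {"name": "bob", "last_newsletter_click": datetime(2012, 10, 1),
     "last_blog_post": datetime(2012, 10, 2), "last_lms_login": datetime(2012, 10, 3)},
]
print(flag_inactive(roster, now=datetime(2012, 10, 25)))  # only bob is past the window
```

Correlating twitter, diigo, and blog activity would just mean adding more timestamp sources to the `max()` call; the flagging logic stays the same.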

Now some people might cry out "oh, think of the lurkers!"... Well, I am! That's why I am now calling them "passive participants" (a little less creepy than "lurker"). If you have a system in place to record participant activity, you can see who the lurkers are and what they are looking at, such as course videos, synchronous sessions, LMS discussions, twitter posts and blogs (the last two from the daily newsletter). If you can get an accurate gauge on how many actual lurkers there are, and how many drop-outs there are, you can do a better job of getting the passive participants to participate in some fashion (example off the top of my head: participating in quick surveys before the next week's topic opens, and including those responses as part of the topic).

The drop-outs you don't have to worry about; they are gone. It would be nice to know why, but you don't have to expend too much time and energy getting them to participate. Passive participants, on the other hand, are good potential resources for everyone in the MOOC, even if the only thing they do is participate in weekly surveys.

Finally, cMOOC vs xMOOC makes a difference. 100,000 on coursera is not the same as 100,000 on a Coursesites/D2L/Canvas MOOC run by Siemens, Bonk, or de Waard. Coursera is like Amazon: if you go in for one free class, you might end up signing up for another 5. They are there, they are advertised, and they are recommended. cMOOCs, on the other hand, are a word-of-mouth endeavor. If you don't follow a certain type of person on twitter (for tweeting or retweeting), you won't know about the MOOC. cMOOCs are all about word of mouth, and as such they also tend to be more niche and focused on higher education. Thus one course's massive numbers don't equate 1:1 to another course's massive numbers. So please, let's just get rid of the ridiculous term LOOC :)
Thoughts?

 Comments

Entrepreneurship (and commercial) activity in education

 Permalink
It's week 3 in #cfhe12 and the topic of the week is Entrepreneurship and commercial activity in education, and I kicked off the week by reading The Evolution of Ed Tech in Silicon Valley and How the Internet is Revolutionizing Education. There are, of course, other readings that I intend on getting to, but these two were the only HTML documents that were easy to send to Pocket (I did, however, skim the educational start-ups PDF because I was curious).

In any case, it was interesting to read about the venture capital process, how it related to EdTech, and how much quicker (and easier) it is to be innovative these days. Now, when I say "be innovative" I don't mean the actual having an idea part, but the ability to execute it. With services like Amazon's cloud services it's easier these days for someone who has an idea, and has some know-how (or access to know-how) to be able to get up and running.  Not that long ago one had to go to the appropriate authorities to buy a server, to put it on a campus network with a dedicated IP, invest in backup and recovery tools, including UPS, and hope that the campus IT folks didn't find out (or pull the plug) on such initiatives.

On my own campus there were stories of "people running servers under their desks," told by IT folks in a rather disapproving way. At that point I was younger, more idealistic, and working for IT; thus I too was thinking about it in a disapproving way. My thought was that they should just contact IT, get the resources they need, and do it officially. That way, they get the right tools to get the job done. Oh, how naive I was :-) Fast forward 10 years and now I too am trying to avoid the IT department. Why? I still like them, they are my friends and colleagues after all, but the organizational culture of a large IT department can be summed up by "batten down the hatches," which ultimately means that entrepreneurial spirits can get crushed.

So, let me go back to this idea of entrepreneurship and commercial activity in Higher Education. I put commercial activity in parentheses in the title because I think that starting with the profit motive is a recipe for disaster. One has to fail often in order to find things that work, but the key focus should be on finding things that work, rather than finding things that work just enough to sell. I think that educational entrepreneurs need to focus on the teaching and learning aspect of the equation, something that isn't always a commercializable item. The spirit of experimentation and inquiry needs to have, as its master, the improvement of the academe, to get us out of certain old, smelly and moldy situations; not what we can in turn sell. The cynic in me thinks that we are already selling something: credentialing. You might be able to turn around and capitalize on your innovation later (think LMSs and how they grew out of campuses and became their own thing), but that should be a happy by-product of what you did to make things better for learners at your school (or consortium).

I think the focus on money and reputation is one of the problems with MOOCs (xMOOCs) today. Sure, I don't think that the people behind coursera and udacity started with this in mind. As a matter of fact, I am pretty certain they didn't. But universities are now looking at the prestigious institutions in Cambridge, MA and want to offer their own MOOCs so that they can get visibility for their programs as well. The problem is that doing something for visibility is the wrong motive for offering free education. Khan, of Khan Academy, didn't think of visibility but of the education of the person he was tutoring, and how useful it might be for others. Recognition came later as a good by-product.

The problem I have with institutions coming into MOOCs the way they are is the real danger that it will lead to something like a dot-com bust. When that bust happened, many copycats and "me too"s went away. Maybe they had nothing to offer to begin with, but in the academic sphere I think every school has something to offer. If and when that bust comes, and little or no money is to be had, we might write off free open education, OER, OCW and everything that goes with it as a fad. And, because of a certain gold-rush-and-bust cycle, it might be that an idea and teaching methodology gets sent to the internet dustbin because it didn't pan out in the short time that it was allowed to live and make money (i.e. prove itself).

Thoughts? :-)
 Comments

Last week of Blendkit2012!

 Permalink
Here it is! The final week of BlendKit2012! I know it is only a 5 week MOOC, but it seems to have gone by pretty quickly! The topic of this week, as with any well designed course, is evaluation; or: how do you know that your learning intervention (in this case designing a blended course) has worked, and that your learners walked away with the knowledge they need to be successful? The reading this week centered around this topic of evaluation. The questions to ponder are as follows:
  • How will you know whether your blended learning course is sound prior to teaching it? 
  • How will you know whether your teaching of the course was effective once it has concluded? 
  • With which of your trusted colleagues might you discuss effective teaching of blended learning courses? Is there someone you might ask to review your course materials prior to teaching your blended course? How will you make it easy for this colleague to provide helpful feedback? 
  • How are “quality” and “success” in blended learning operationally defined by those whose opinions matter to you? Has your institution adopted standards to guide formal/informal evaluation? 
  • Which articulations of quality from existing course standards and course review forms might prove helpful to you and your colleagues as you prepare to teach blended learning courses?
I find it interesting that peer, colleague, and potentially mentor, evaluations are mentioned here because it's not something that I've come across often in instructional design contexts. Usually most instructional design is iterative: you reach the evaluation stage once you run the course, gather feedback, and go back to the drawing board in order to improve your course :-) I actually like the idea of bouncing ideas off colleagues because it means that you can get feedback before you actually run a course, fix any issues that were in your blind-spots, and iterate more rapidly.

I like the statement from Singh & Reed (2001) “Little formal research exists on how to construct the most effective blended program designs” (p. 6) [in this week's reading]. It brings me back to week 1 on Blendkit2012 when I was thinking out loud about the blend, and the potential conflicts of goals for blended courses between college administrators and college instructors.  The admins probably want to see a standardized 50-50 blended course so they can get the most use out of physical locations and utilities; while instructors need to think about what the right blend is for optimal learning experiences.  This, of course, may mean that the utilization of the physical campus locations may not be optimal, as compared to fully on-campus courses; so begins a dance to find the right "mix" for blended courses to make sure that they are both pedagogically superior and making appropriate uses of the campus without imposing a prescribed meeting space and time for courses.

Finally (back to ensuring quality), the readings do provide some more standards to look at for online course quality, and I've already bookmarked most of them. I am already QualityMatters certified (so I am familiar with that rubric) and I am in the process of completing the Blackboard Exemplary Course MOOC, so I am getting familiarized with that. As the chapter pointed out, some of these rubrics may seem very prescriptive, but (from what I see) even if you pass the evaluations using such rubrics, this is only the setup. It's the execution that matters a lot in quality: when the rubber meets the road, when the instructor meets the students, and teaching and learning happen. Even if you've designed an awesome on-campus, online, or blended course, if the instructor is not on board you are destined for not-so-good things. This is why I think that, in order to ensure quality, the instructor(s) of the course need to be part of the design, or of a debriefing process (if the instructor is an adjunct and was not there when the course was designed by a peer or an instructional design team), and they need a peer community of practice (those teaching the same course in the same mode) to get them ready to teach the course and to feed what they find back into that community, so the course can be improved and other teachers of that course can learn from each other's experiences.
 Comments

MOOCs, demographics, and wrangling the edtech

 Permalink
Yesterday morning I was catching up on some #cfhe12 blog posts by Bryan Alexander (whom I have not seen in a MOOC in ages), a blog post about defining MOOCs by Rolin Moe, and my colleague Rebecca, who writes about the ease and usefulness of MOOCs.

First, let me respond to Rolin's points (since I happened to read his blog post first).

There are lots of people looking at the future of academic publishing, pushing for an open movement. Some academic journals have gone open, but the majority of journals carry a high price tag which only exists as price opportunistic for educational institutions (and some rare corporations and organizations). Yet academic journals are part of the lifeblood of scientific research, especially for soft sciences (such as education). By only working with open resources, a cMOOC cuts many of these empirical, peer-reviewed research works out of its circulation, having instead to pull from free resources that often lack academic rigor. For a cMOOC to truly excel at its intention (get people to coalesce around a topic), it is going to have to include the strongest work on the topic, and it will need what today exists in academic journals to do so. As the future of academic journals goes, so does the cMOOC. The movement for open access is important for a multitude of reasons, but perhaps entrants into the cMOOCs should use their collective power/cognitive surplus to lobby for changes to the system, rather than only read about it from outside the walls (and outside the rigor).

I have to disagree with Rolin here. There are many academic journals that are rigorous and are openly published. I think Rolin does a disservice here to open publishing by, essentially, equating it with publications where everyone can (potentially) get on their soap-box and spew any sort of inaccuracy that they want. cMOOCs have used open access, peer reviewed journals. cMOOCs (and xMOOCs) are not limited to non-peer-reviewed works. It's all a matter of course design and what you are expecting to get out of your materials. In other words, why are you including peer reviewed materials as part of your course?

I disagree that in order for MOOCs to really excel at their intention one must (always) include peer reviewed journals. One must look at the objectives for the course, and then pick appropriate materials, methods, activities (and yes, assessments) in order to achieve those goals. You can't unilaterally say that peer reviewed journals are a "must." Here is a counter-example: when I was an undergraduate in computer science I never touched peer-reviewed journals, with the exception of my art and philosophy courses, which were outside of my major. I did, however, spend a lot of time solving equations and coding.


At the same time, I am currently enrolled in two cMOOCs: Current/Future State of Higher Education (#cfhe12) and Openness in Education (#oped12). Not only is the majority of the “student” population made up of people in high-level or post-studies academe, but I can count on one hand the number of non-university individuals I have encountered in the courses. There is plenty to consider with that kind of demographic, but in relation to academic access, this group has access to academic journals. Again, Open is one of the four tenants of MOOC, so removing that openness would hit at the bedrock of the MOOC movement, but just because the academic journals are behind a paywall does not mean their contents can or should be ignored.

It is true that cMOOCs do tend to attract people who are already in academia and in higher-level studies. I think in Rolin's case he is in two MOOCs that are of interest to academia and academics, but not necessarily to anyone else.  If you look at discussions around academia these days, it's all about going to school to get a job.  People don't care about Openness because they haven't been touched by it.  Libraries (funded by taxpayers) do subscribe to some paywalled databases, but that doesn't mean that the average joe-citizen goes to have a look!  The second reason why cMOOCs are frequented by post-studies academe people (versus any joe-undergrad, or any person long out of school) is that they are not set up in the lecture-and-test model that people frequently expect of education in general.  cMOOCs seem, to me, to require lifelong learning skills: culling resources, pruning materials, figuring out what's good and who's just pushing BS, and so on.  These are skills that require refining and practice, and when you come in with the expectation that you will be lectured at and then take a test, well, the two modes don't connect :-)

As far as "open" goes, I've had this debate with many of my MOOC colleagues.  What is open?  If someone asks you to buy a textbook to participate in a MOOC, does that make the MOOC not open?  I don't think so; I think it's still open, but some of my colleagues would probably disagree.  It all goes back to the whole free beer/free speech thing of the Free Software Movement.  They haven't come up with one definition of "Free," so I expect that we won't come up with one definition of "open."  For me, Open is a shade of gray.  Finally, in response to the following:

In a blog about Alec Couros and PLNs, I remarked positively on the concept of facilitator, or someone who organizes the MOOC but only in a manner to establish discourse, not influence it. Thinking over it again, I am not so keen on a Deist teaching method. I appreciate a desire to not overtly influence discussion and the creation of learning, but how does such an approach account for knowledge gaps? I assume (note: assume) the pedagogy here would take from crowdsourcing, and believe the wisdom of the crowd would provide assistance and fill in the knowledge gaps for those with said gaps. Of course, people like Jaron Lanier see crowdsourcing as a net negative rather than positive, and refer to it as mob mentality. Knowledge gaps can result in faulty conclusions, and if we are to believe Argyris’ Ladder of Inference, this will become cyclical, with individuals seeking out new sources of information that complement their prior knowledge and beliefs…beliefs built on knowledge gaps and faulty conclusions. Off that angle, people might not have knowledge gaps but instead just be wrong about something, lacking evidence or data to support their thesis. As the subject matter in cMOOCs is not objective, right and wrong are blurry terms; however, novices who come to the course with little subject knowledge or experience would be best served to have at least a base of prior research and theory to assist in their learning journey.  

I think that there is a definite issue with "group think," but this is the case with any course.  If you look at graduate level courses (which is what most cMOOCs tend to be based on), there is often no clear answer, no absolute right or wrong.  Sure, in some cases there is a right and a wrong - for now, until that is disproven.  The point of graduate education is to be OK with ambiguity and to continue to inquire, push for answers, and experiment. And then try again.  With undergraduate education (and certainly K-12) we have picked up a banking model of education where we have accepted certain X truths, and we expect to open up people's brains and dump them in.  This may be the way that some xMOOCs operate, but, as stated above, it's also dependent on the discipline. You can't just paint the entire teaching establishment with a broad brush.  Knowledge gaps can indeed lead to faulty conclusions, but that's why you've got more knowledgeable peers around to learn from.  Being in a MOOC means that you seek out your peers to learn from them; you aren't lectured at.  If you look at cMOOCs there is usually no assessment piece. I think this is intentional (for the time being).


Now, let me turn to the variety of MOOC implementations mentioned by Rebecca and Bryan. In all honesty, I am a little disappointed with D2L.  I had messed with it back in 2011, when I was working for my Instructional Design department and we were evaluating candidates, so I knew quite a bit about navigating the mobile interface.  That said, I still find it clunky both on the desktop and on mobile. I agree with Rebecca: if it ain't working on mobile, the MOOC is almost dead to me, since I fill my "downtime" during commutes with MOOC blog posts and articles (and, when I get back on a schedule, reading peer reviewed articles for upcoming research). When I am at home, or in the office, I have other work to do, so I can't mess around with learning the EdTech for specific MOOCs as much. Sure, I can work on it during lunch, but I would prefer to read something interesting (or respond to it) during lunch rather than figure out where my material is.

I think one thing that makes Coursera MOOCs interesting to "the masses" is the simple-LMS feel: see video, take quiz, do assignments, participate in discussion forums. The formula, the look and feel, and the procedures are the same regardless of the course.  The same cannot be said of our cMOOCs, where some use blogs and PLNs, others use an LMS (D2L, Canvas, Blackboard, etc.), others use Google Groups, and so on.  There definitely needs to be a balance between experimentation and offering a course.

Thoughts?


† As a side note, I have not seen many blog posts (at least not as many as I would have expected) in this MOOC.  I am wondering if there is discussion happening in the discussion boards of D2L.  Personally, after the first week, I decided to not follow the D2L discussions.  While I do like Google Groups discussions in MobiMOOC, there is just something "off" about LMS based discussions at the massive level.

cfhe12 - week 2: when worlds collide!

After a title like that, I feel like this blog ought to have a theme song ;-) Is this too dorky? Not dorky enough?  Chime in through the comments :-)

In any case, it's Week 2 of #cfhe12 and the topic of the week is New Pedagogies: new models for teaching and learning. I find it interesting (and ironic) that Blended Learning and Online Learning are considered "new pedagogies" and "new models."  Even though I am currently undertaking two Blended Learning workshops (one MOOC, #blendkit, and one workshop through Sloan-C), I have known about blended learning for a while.  As far as Online Learning goes...I've known about it, and been active in it, for much longer!  How can these models be considered new?  To me MOOCs are new because we are still exploring them.  There is no "one MOOC format," just as there is no one Online Course format. MOOCs are a subset of Online Courses, and MOOCs in turn have their own subsets (cMOOCs, xMOOCs, and so on).

That being said, I am drawn back to "rigor" and what it means to be "rigorous" and "effective." Granted, the InsideHigherEd article was from 2009, but it amazes me that a method of delivery can be seen as less rigorous simply because of the method of delivery. By the same token, I was reading another article on InsideHigherEd (Bitter Reality of MOOConomics) from this past summer where there is a catch-22 for universities.  Universities, in the past, derived their cachet from limited spots, and therefore selectivity and a limited number of accredited individuals; and, of course, the social network you developed. With MOOCs that goes out the window, because you potentially have a massive number of people being "accredited," in some fashion.

The second IHE article talks about getting jobs as the primary motive for people going to college, something we tackled last week on #cfhe12, and something we will most likely see, and talk about, again before this MOOC is over. If people are coming to school for credentialing purposes only, then we have an issue, because the goals and expectations of students are at odds with the goals and expectations of the institution and its representatives: faculty and staff.

[setup] I had an interesting discussion with colleagues last week over the length of courses: again, format versus what needs to be covered and evaluated.  My feeling was that one can have a 13 week on-campus course and a 6 week (intensive) on-campus course and (more or less) get a comparable educational experience. Sure, it may feel like you're under pressure and you're running to get things done, but with a few modifications to assessments you can do it.

In an online space this doesn't work.  You still have the same amount of time, but psychologically (I argue) nothing else changes.  The online classroom is the same whether you have a 4, 6, 8, 10, or 14 week semester.  You can pack in "more materials," but that's about it. In an on-campus class, from a psychological perspective, things change: you meet, in person, twice as often, which signals to the learner that the shorter-length course carries the same(ish) expectations as a regular semester - you are still expected to cover the same materials and be assessed on the knowledge you've gained.  In an online course, without other external stimuli, it's easy for learners to "forget" that they are in a shortened-length course, even though they are still required to cover the same bases as the "regular" length course. This can breed discontent among students.

[punchline] OK, so what does my little anecdote have to do with the future of higher education?  After this very invigorating debate, some of my fellow faculty members said (or claimed) that (from a student perspective?) the reason to take shorter-length courses is to "easily" get 3 credits and move closer to graduation in a shorter time frame.  While I understand that this may be in the minds of students - given that they think that the purpose of education is purely utilitarian (i.e. get a job) - I felt a little uncomfortable with the prospect that faculty (who self-govern their programs) may be starting to think this way too!  It's up to the faculty to keep the spirit of Higher Education (inquiry) alive, to find the right blend of inquiry for inquiry's sake and relatedness of knowledge to "real" life; and when I hear that maybe we ought to capitulate to the need of the moment (i.e. get a job), I feel that academia has betrayed me. Where is academia headed?


Your thoughts?
