Club Admiralty

v7.0 - moving along, a point increase at a time

Multilitteratus Incognitus

Traversing the path of the doctoral degree

Siri, Alexa, Cortana...OK google - show me something to learn!




Alright, so here it is, week 6 of NRC01PL. Even though I am technically in the same week as everyone, I guess I am still marching to the beat of my own drummer. I wanted to join the live session on Tuesday, but other things intervened. Oh well.

The topic of this week is the personal learning assistant. Hence my little callout to the four major virtual assistants (Siri for Apple, Alexa for Amazon, Cortana for Windows, and Google...for Google). I actually did try asking Cortana to "show me something to learn" but I guess the Bing search engine didn't know what the heck to do with my query. Google wasn't that much help either. We haven't reached the point yet where they know enough about me to recommend something. It's a little odd given how much data Google probably "knows" about me.

So, what is a Personal Learning Assistant (not to be confused with a Personal Assistant for Learning)? According to Stephen, the PLA is a platform that (1) provides a convenient interface for the user to perform the task (of learning); (2) treats each user interaction as a training example (think Amazon's recommendation engine); and (3) learns general regularities from this training data (think Google knowing where my home and work are based on how much time I spend at certain locations, and during what times).
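
Just to make those three pieces concrete for myself - and to be clear, this is purely my own toy sketch in Python with made-up topic names, not anything Stephen presented - points (2) and (3) might look something like this:

```python
# A toy PLA sketch (mine, not Stephen's): every interaction is logged as a
# training example, and recommendations come from the regularities learned.
from collections import Counter, defaultdict

class ToyPLA:
    def __init__(self):
        self.topic_counts = Counter()           # overall interest in each topic
        self.follow_ups = defaultdict(Counter)  # which topic tends to follow which

    def log_interaction(self, topic, previous_topic=None):
        """Point (2): treat each user interaction as a training example."""
        self.topic_counts[topic] += 1
        if previous_topic is not None:
            self.follow_ups[previous_topic][topic] += 1

    def recommend(self, last_topic=None, n=3):
        """Point (3): use the learned regularities to suggest what's next."""
        if last_topic in self.follow_ups:
            return [t for t, _ in self.follow_ups[last_topic].most_common(n)]
        return [t for t, _ in self.topic_counts.most_common(n)]

pla = ToyPLA()
pla.log_interaction("instructional design")
pla.log_interaction("learning analytics", previous_topic="instructional design")
print(pla.recommend("instructional design"))  # ['learning analytics']
```

Scale that idea up across thousands of interactions and data sources, and you start to get something that could actually answer "show me something to learn."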

Another thing that Stephen mentioned was something called Business Oriented Personal Learning Agents (reminds me a lot of my HR and KM days). Some components of this are (1) human learning profiles across demographics (meaningful demographics, not the silly stuff that we fill out); (2) integration of operations and training databases; (3) unbiased evaluation of performance (I assume this is by your manager); (4) development of enterprise learning profiles and patterns of learning and action; and (5) making training available in multiple modalities (and from multiple providers and sources).

This seems like something that is quite interesting from a work perspective. The one concern I have about this is data lock-in. In days past people used to stay with one company for a big part of their career. Learner data lock-in would not be such a big issue if you're someone like me (at the same institution for close to 20 years now); however, if you're like some of my classmates, you've worked for at least 3 different companies in the last 10 years. Having an internal gauge of how employees are doing, what they need to learn, and how effective that learning is would be great. It helps corporate instructional designers and talent developers do their jobs more effectively. However, I do think that this data also belongs to the learner, as it forms part of their lifelong learning record. If they leave that company, and if the data is proprietary (or under some sort of NDA), then that, to me, is a bit like a brainwipe (at least a partial one) for the learner's record. If there is a need to keep some information compartmentalized due to NDAs and a company's competitive advantage, then I'd like to see an appropriately scrubbed and generalized learning record exported concurrently to the learner's preferred performance and assistance platform.

Finally, when we're thinking about the personal learning assistant, I am reminded a lot of the Knowledge Navigator (see video at the bottom). While this was meant to be a concept for PDAs, I think we're still seeing a lot of this vision coming to fruition today with our connected devices. I think the PLA also falls into this category. The problem, as I brought up in my previous blog post, is that we have a lot of data about us out there, but it is inaccessible to a central platform or device that crunches all of it into something that is useful for the learner.




Higher Education questions - 7 questions

It seems that Inside Higher Education is playing a game of 7 questions. I thought that it would be interesting to respond to these when I had a little more brain space to write some more in-depth answers instead of the "agree or disagree" which was the original prompt. These might very well fit into my Educational Leadership course now that I think of it. So the questions are in italics, and my responses are in regular text.


1) A higher education program where students graduate with a credential, but without substantial career development, is a failed experience.

It depends! I don't necessarily see higher education as being concurrent with career development. Sometimes, in some programs, and certainly depending on the degree, the benefits of higher education are seen in the long term, not just in the short term after graduation (i.e. gaining a new job or obtaining a promotion). Some programs require apprenticeships or practica. In such cases I would say that the academic side can go hand-in-hand with career development. However, not all programs are like that. Some students do not seek programs that have required practica because they already have a job and they can't take time off from that to undertake the apprenticeship requirement. By necessity programs do have to appeal to a fairly broad learner demographic, and such a requirement is limiting.


2) Student affairs/services should scale their engagement efforts via intentional (and sustainable) digital outreach. Not knowing the tech isn't an excuse.

Here I agree. However, I will say that technology is not necessarily the issue. You can use technology to increase your outreach, but your audience may not be able to access you through digital means. Technology isn't always the solution to the problem.


3) UK student services is nearing an inflection point...stay tuned for administrative structures (budget/personnel) that look more like US student affairs administrative divisions.

I am not sure about the UK structures. In the US we may be reaching that inflection point. Everyone is complaining about the inflation of administrative positions at universities in the US, which increases costs, so we might be seeing a shake-up - stay tuned.


4) The student experience affects an institution's brand and ability to be competitive. A bad experience is bad for marketing and enrollment.

Maybe...maybe not. It depends on what the bad experience is. If a student is a bad match for the program that they were accepted into, then I think the blame falls partly on the university (for admitting such a student when the fit may be wrong) and partly on the student for not doing their homework to see what the program they applied to is all about. If a learner doesn't want to do the work for a class and they get a bad grade (or they don't pass a competency), then they can complain all they want - it's not the university's issue. However, if there are structural issues with student support, and the university does nothing about them - then that will hurt the university's competitive advantage.


5) No one is a digital native/immigrant...we all have unique levels of digital capability regardless of age.

Why are we still talking about digital natives and immigrants?  Let's move on. That should tell you where I stand.


6) Online-only degree programs are as worthwhile as traditional campus-based experiences.

It's 2016, why are we still talking about online programs within a deficit context? Let's move on!


7) Staff need digital capability/literacy in order to teach digital capability/literacy. You can't have one without the other.

Again...  this seems like a no-brainer to me...




So, what do you think of these?


Will MOOCs replace the LMS?


My apologies, in advance, if I seem rude. One of my teachers in high school (maybe a few of them, in fact!) said that there is no such thing as a stupid question. Perhaps this is true in the context of a classroom, where a learner (or group of learners) who doesn't get a concept can ask a question to disambiguate it. Sometimes the questions we pose also demonstrate our understanding of the basic components that build up our question, and hence our question can shine a light on things we've misunderstood and give an opportunity for more knowledgeable others to help us correct misconceptions.

However, this is not one of those cases. "Will MOOCs replace the LMS?" is a really stupid question. I was reading a post over at YourTrainingEdge that was titled Will MOOCs replace the LMS. I actually came to it thinking that it was a bait-and-switch type of situation because the two aren't comparable. A MOOC is a course (and in the corporate sector I would say that the most likely type of MOOC is the xMOOC), and an LMS is a set of technologies that allows one to build courses, offer them, and track learner progress. An LMS is not a course.

The authors start off (sort of) by explaining that MOOCs and the LMS are not the same; in fact they say "Many trainers confuse LMS with MOOC, which needs to be stopped." Phew! Now that takes a load off! So I continue reading for something enlightening... and I come across this:

Now let’s see why MOOCs are going to replace LMSs in 2016. MOOCs, in my view, are not only for college students or budding programmers any more. The courses offered from top notch MOOC providers like yourtrainingedge, Coursera, EdX, and Udacity have, until recently, been mainly focused on the academic setting. In addition, all of the main MOOC vendors have developed their classes by means of partnership with renowned and prestigious universities like MIT, UPenn, and Stanford. However, evidences show that academic and students might not be the only user base for the MOOCs.

OK, for me this is a massive facepalm. Basically what the authors are arguing is that self-paced elearning created by a third party will replace your in-house training. That's perfectly fine. As a matter of fact it's nothing new! Companies have been purchasing access to courses from Lynda.com, Microsoft, and SkillSoft for many years now - way before MOOCs came along. The only narrative that is changing is that of prestige. The self-paced elearning is, perhaps, not as prestigious because it's developed by your own in-house team, or by some nameless instructional designer at Lynda. However MOOCs...well...those have the names of big-name schools and professors behind them (rolling my eyes).

Listen. I have no problem with partnering with Coursera or universities directly to develop courses specifically for your corporation. I think it's a fine and dandy idea. What I really dislike is the repackaging of the old, adding some new luster, and calling it a new and improved product. Let's be honest about what we're selling and how it differs from what's currently being done.



Who's a teacher?




With the semester over, and the brain working on momentum, I've decided to capitalize on the spare brain-power, and time, to finally read a book that I agreed to write a review for back in the summer (yeah, I know - a tad bit late...). The book is a collection of articles titled Macro-Level Learning through Massive Open Online Courses (MOOCs): Strategies and Predictions for the Future (an IGI Global title). I'll come back to the topic of the book as a whole after I am done with this process. I think that going through it chapter-by-chapter, picking out and reacting to some things that piqued (and poked at) my interests, is a little more interesting than trying to condense 15 chapters into one book review. This is sort of what I did with the #rhizoANT review.

Chapter 1 is titled Mining a MOOC: What Our MOOC Taught Us about Professional Learning, Teaching, and Assessment.  The abstract gives us a sense of the article:
In July 2014, a massive open online course (MOOC) entitled The Assessment and Teaching of 21st Century Skills (ATC21S) was offered within the University of Melbourne's programme. Designed as a research engagement and dissemination initiative, the ATC21S MOOC enrolled 18,000 education practitioners, predominantly interested in teaching and assessment of complex 21st century skills. This chapter describes the experience of developing and teaching in the MOOC, and of learning through it. The authors suggest areas for ongoing research, and highlight areas in which MOOCs may stimulate broader change. This chapter commences the dialogue for the opening book section – policy issues in MOOCs Design, and responds to the topic of ‘emerging technology and change management issues for eLearning in the MOOCS environment.'
This article seemed a bit like an action research project, which is fine, but it did not really add to my own understanding of MOOCs. It does provide some data, which in aggregate can be considered as part of the xMOOC learning environment, but the MOOC aspect of the article didn't provide much for me personally. On the other hand, some comments, and assumptions about technology, did pique my interest a bit. For example, right from the start the authors comment that MOOC platforms are still in their infancy. While this may be true when discussing platforms like Coursera and Udacity, we've had the LMS around for at least 20 years.

Another comment "The platform determines the organization of the materials and the processes of the course..." while, in it does ring true, it seems to me that taken together with the previous quote is sort of an excuse to work within the confines of what the MOOC LMS allows.  While I don't consider myself an EduPunk, it's kind of hard to think of MOOCs (these days) and conceive of people painting within the lines of the LMS when what kicked off MOOCs was this sense of the untamed and MacGyvering to reach your aims. In other words, your aims were not determined by what you had available.

The authors ask us to consider that "'teaching' should not be conflated with what a teacher does." This is true, in a sense. What a teacher does is teaching, however teaching isn't solely defined by the actions of a teacher. Fellow students can be teachers as well, if we - for example - take a Vygotskian view of the more knowledgeable other who helps scaffold fellow learners to new learning. That said, I do find it a bit problematic to consider the platform as a teacher "who tirelessly organise[s] the learning experience".

While I do think that technologies can be actors in a learning network (at least from what I've read and experienced with the ANT readings) and they can influence how actors connect and work with other actors and knowledge in that network, I think that the authors of this paper are giving technology, and the LMS in particular, too much of an active role.  The LMS is an inert piece of technology. It does not organize anything. A human actor acts to organize the learning materials, and perhaps learning opportunities, that occur in that learning network.  While, from a connectivist view (if I am interpreting connectivism correctly), the learner can access 'learning' from a non-human appliance, I don't think that the act of providing materials is the same as being a teacher.

In their conclusions, the authors indicate that the "distinctive teaching power of a  MOOC arises from the combine 'teaching' efforts of  three components: a course team of collaborating professionals; a digital platform that tirelessly organises and provides feedback to learners; and the peer teaching capabilities of a collegial, experienced, qualified, group of participants".

Those three components, in my view, are available in traditional online courses as well, so I am not sure how MOOCs are different in this view. However, I do think that there is a subtle distinction here around the concept of peers: they are collegial, experienced, and qualified. This to me indicates that MOOCs do have pre-requisites (and those should be encouraged during development) and there is an aspect of collaboration hinted at with the collegial piece. I don't know if I've read about "ideal" learner characteristics for MOOCs in other pieces in the past.

Next blog post, chapter 2.  What do you think of chapter 1?








Citations:
Milligan, S., & Griffin, P. (2015). Mining a MOOC: What Our MOOC Taught Us about Professional Learning, Teaching, and Assessment. In E. McKay, & J. Lenarcic (Eds.) Macro-Level Learning through Massive Open Online Courses (MOOCs): Strategies and Predictions for the Future (pp. 1-24). Hershey, PA: Information Science Reference. doi:10.4018/978-1-4666-8324-2.ch001



RhizoANT and email

The other day Rebecca posted on her blog and asked how we (I think she meant other RhizoANT collaborators) view email. How is email different from other technologies that we use to communicate with one another for various projects? In a previous RhizoANT post I wrote about (what seemed to be) our main vehicle for communication, the Google Doc. Of course, as Rebecca points out, we also used email to discuss some topics off the record, sort of like the sidebar that lawyers have with the judge in a court case.

Just to kick off I'll start from the stance that I don't hate email.  I do my best to be at inbox-zero.  It never really happens for me, but I do my best.  At any given time I have anywhere from 5-10 email messages that need my attention.  As I respond to them, I archive them (no need for filing, just hit archive in gmail!)  While I have access to Google Inbox I have opted to not use it.  I prefer the look, feel, and functionality of GMail "classic" and, at least according to Rebecca, GMail classic has better search functionality, which for me is a key feature because I don't bother filing anything.

Now, there is one 'feature' of email (in general) that I don't like. Every time someone responds there is a loooooooooong appendage to their email with the history of the communication. I know that this is a good feature for replying (so that people know what you are replying to), but we, as a RhizoResearch team, tend to use email conversationally. So we might just add a sentence or two as a response. There really is no need for the history. Also, participants respond at different times to different messages, so it becomes an experience of trying to piece things together after the fact.

When we are discussing via email in a conversational manner, it's not a problem that messages become convoluted and we have email chains like the one pictured. However, when we discuss deliverables, and we are planning how we will proceed with a project, these email chains become unwieldy when trying to figure out who is doing what - or heck, forget about others for the moment: email makes it harder to figure out what I agreed to do at some point in some email without going back and looking at everything and all interactions. I think a way around this is to have a recording secretary for the meeting whose task is to keep track of an email thread and pull out actionable items (can't this be automated?).
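
To answer my own parenthetical: probably, at least crudely. Purely as a hypothetical sketch (my own, with made-up names, and assuming a plain-text export of a thread), an automated "recording secretary" might do little more than scan for sentences that sound like commitments and note who sent them:

```python
import re

# Naive "recording secretary": scan a plain-text email thread for lines that
# sound like commitments ("I'll draft...", "we will review...") and tag the sender.
COMMITMENT = re.compile(
    r"\b(I|we|I'll|we'll)\s+(will\s+)?(draft|write|send|review|collect|edit|set up)\b",
    re.IGNORECASE,
)

def extract_action_items(thread_text):
    items = []
    sender = "unknown"
    for line in thread_text.splitlines():
        if line.lower().startswith("from:"):
            sender = line[5:].strip()
        elif COMMITMENT.search(line):
            items.append((sender, line.strip()))
    return items

sample = """From: AK
I'll draft the methods section by Friday.
From: Rebecca
Sounds good - we will review it next week."""

print(extract_action_items(sample))
# [('AK', "I'll draft the methods section by Friday."),
#  ('Rebecca', 'Sounds good - we will review it next week.')]
```

A real version would obviously need to be a lot smarter than keyword matching, but even this would beat re-reading a 40-message chain to remember what I agreed to do.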

The other thing I wish email did better was managing my identities. I started using email around 1996 or 1997. Yahoo and Hotmail (before the Microsoft acquisition) were my first two email addresses. I still have them and use them. I also have 4 GMail addresses, and I use every one of them for different reasons. I have some work colleagues who have been acculturated into email use in the following way: if they want to make sure that you see something, like right NOW, they will email ALL of the addresses they have for you. Now, why should I clean up 6 email accounts after I've read and responded to the email? Why not have an ability to manage my multiple identities in one spot? This isn't just a problem with those one or two work colleagues who exhibit this emailing behavior, it's also a problem with collaboration. I use my university email (email #8...which is actually managed by one of my GMail accounts) as my de facto contact for research collaborations, but what I really use when the rubber meets the road for actual work is GMail (Google Docs), so my communication gets fragmented among different email accounts, or worse, I have to tell people to invite multiple GMail accounts to collaborative documents to make sure I get things.

There's got to be an easier way ;-) Part of me is wondering if we've exceeded the reach of email. Email seems to be a contemporary analogue to traditional mail, which is brought to you by the post office. I have many fond memories (and old letters to prove it!) from dear friends and pen pals from 20 years ago. Mailing something took a week to get there (more or less), and another week to get back. If people wrote back to you right away the travel time would be 2 weeks; if they took longer, obviously it would take longer. The heuristics of email are similar to traditional letter writing - despite the instantaneous nature of email. The usage of the system by its users has evolved to take advantage of that instantaneity; however, the heuristics of the system have not. Have we stretched the metaphor too thin for email? Is it time to bring back Google Wave? ;-)

Your thoughts?

Swarm the Google Doc, or so says the ANT

Did someone say "swarm"?
Alright. I've completed the first half of Latour's book on Actor-Network Theory titled Reassembling the Social: An Introduction to Actor-Network-Theory. In a couple of blog posts (really soon) I will be continuing my exploration of ANT through this dialogue I've developed with Latour. I also, at the recommendation of Maha (I think), read Cressman's brief overview of ANT (PDF here). So now, inspired by Maha's post, I turn my attention to utilizing ANT (whatever my rudimentary understanding of it is) toward an analysis of the #RhizoResearchGroup's use of collaborative technologies. Specifically I am dealing with Google Docs (or at least some elements of Google Docs, given that ANT can lead you down a rabbit hole).

Briefly, I would describe ANT as being a philosophy, or frame of mind, that attempts to account for both human and non-human elements in various interactions. Non-human elements can be technology, such as the keyboard I am using to type this up, but also things denoted by collective nouns - such as corporations, groups, senates, parliaments, and SIGs. Actors can be just actors, or they can also be networks. It just depends on what frame you are looking at them from. This type of complexity is a bit mind-boggling because there are many ways of interpreting some occurrence. Latour even says to be prepared for failure, which doesn't bother me, but knowing that consensus is going to be hard to come by makes this activity a bit frustrating.

Anyway - I am now focusing on Google Docs, specifically the word processor part, and our use of it for the #rhizo14 collaborations. What I remember using this for were the following (it might be interesting to compare our lists). The list is in some sort of order of recollection (not of significance):

  • collection of our various stories for the collaborative autoethnography
  • creation of the Untext
  • writing of the CAE
  • preparing for the #ET4Online discovery session
What's interesting is that I don't remember the brainstorming document that Maha mentions.  It must be that different things are more vivid in the minds of some actors than others.


In any case - presenting ANT meets Google Docs (meets RhizoResearchGroup!). I think that the place to start is to acknowledge that technology is neither valueless nor value-neutral. Technology should also not be seen as a black box. Decisions made during the design and creation stage of any product have a way of impacting how that product shapes the interactions that users have with it, and with others who are using this product. Here are some of my observations on using the Word Processor (WP) function of Google Docs (GD).

The WP was developed with the mindset that there is one user with one keyboard. Think of a typewriter and a person typing. The document that is being written on the typewriter is almost done. It's not a rough draft, it's not some ideas that you are throwing on the page. That was done, presumably, on paper before you even touched the keyboard. The draft you are creating with the WP is mostly dealt with as a draft-for-comment, your final draft before someone else sees it. This is due to the historical development of the WP as an electronic update to the typewriter.

The WP does add functionality, however. The processor part does allow the author (or typist, if the author can't type) to go back and edit: to cut things out, to paste things in, to process the text by adding and removing formatting like typeface, size, and style. It even allows for editor functions, like adding comments, editing text, and marking changes and who made what change when. Of course, even this collaborative function isn't really collaborative but cooperative. The software still treats the document as having 1 owner who is ultimately in control of the document, and others who can come in, make some small changes, and add comments. These are seen as parenthetical and on the side, so they aren't meant to distract from the main part of the document, which is the focal point.

So, how does the historical development of the WP affect how we interact with it? For one thing, the focal point of 1 document, 1 (or few) ideas, 1 author, and many in the peanut gallery makes it hard for a swarm to come in and try to make sense of a document in a linear fashion - which is what the WP was created to do: put something into a linear document for consumption by others. Even when reading one person's contribution (e.g. their story in the collection of stories document), the reactions people had to parts of the story (agreeing, disagreeing, adding to, cheering on, etc.) had the effect of bringing you out of that narrative and into the sidebar. In essence the WP became a faux-hypertext document in that you could take branching paths to go to different places in the document as you read, but unlike a hypertext document, where the reader chooses whether to click somewhere and go, those highlighted passages and dotted lines - like the Sirens in the Odyssey - lured your attention from the main document to the marginalia. This isn't bad per se, but does the technology, and how it presents itself to us, influence how we interact with the words on the page? I would say that the WP is not set up for a swarm approach to document writing, even though we made it work.


This "issue" of marginalia, and other attention breaking devices, is device dependent.  If you read the document in a mobile browser, something like a smartphone or tablet, chances are that you didn't see the marginalia, so your attention was not taken from what individual authors wrote. If you did not have full permissions to the document you might see it, you might not see it, and if you did see it you might now have had the comments showing.  Or, you might have been able to make "suggestions" rather than edit right on the page. This has an implied power-dynamic, and it brings me back to the origins of the WP, as a one person-one document setup.  I think that this also, at a subconscious level, had an effect on how we interacted with the text originally. Instead of going in and stepping on someone's toes by directly editing their contribution we added marginalia, asked questions, tried to negotiate, instead of actually allowing the swarm to work it out - to go in and change parts, without permission, and see where the document ends up, as opposed to the negotiation aspect that we've been enculturated into.

From a power dynamics perspective, comments and the "resolution" of them again assume a certain power structure - that the 1 author of the 1 paper has the power to accept or dismiss the comments made by other people. Even if we are all authors, the fact that I have the power to "resolve" Keith's comments means that others may not even see what he wrote as a comment in the document. One node - one actor - has much power over that network in this sense.

Finally, ideas will not begin with the WP. They are carried there by actors. These actors have interacted with other actors, both human collaborators and other non-human actors such as collaboration technologies. As Maha wrote, what ends up on the Google Doc hides much of what happened outside. This reminds me of Latour's fifth uncertainty - when you start notating and jotting things down you lose the richness of what has transpired. Even with this blog post, I've boiled down something really complex into one post. Even with this post I have not discussed everything that could be discussed about Google Docs, the WP, power, and design decisions around software and group processes, so much is lost. To some extent the fifth uncertainty seems rather nihilistic...


Thoughts?





Attention splitting in MOOCs

The other day I caught a post by Lenandlar on the #Rhizo14 MOOC, which is over, but which we amazingly are keeping going. At the end of his post on motivation there were a few questions that I wanted to address, since they've been on my mind and they've come up a few times in the past week.

Are MOOC participants in favor of shorter or longer videos or it doesn’t matter?  

I can't speak for all MOOC participants, I can only speak for myself, and from my own experiences. I can say that video length does matter, but it's not just about the video length.  On average, I would say that you don't need a video that is longer than 20 minutes. My feeling is that if I want to watch a documentary, I will watch a documentary, not participate in a MOOC. Anything longer than 20 minutes is probably unfocused and not suitable to the medium and the goals of the course.

Of course, simply having 20 minutes to work with doesn't mean that you should take up all that time.  This goes back to figuring out what your message is, what you need to talk about, how you are going to present it, and what the ultimate goals are of the video.  I think that Grice's maxims are a perfect fit here :). If your video is 20 minutes long but is just right for what you intend to do with it, great.  If your video is 5 minutes but it failed miserably, then you wasted my time, or worse you diminished my interest in a topic I was previously interested in. At the end of the day, it's not about the length. So long as the learner knows the duration of the video, and any dependencies (i.e. do I need to watch something else before I watch this), if the video is well made, on point, and on-time, you are OK. The learner can carve out the time that they need to watch certain videos if they know the duration ahead of time.

What is the extent of discussions taking place on Forums set up for MOOCs?  
Again, this is only my experience. I think that some forums work well, and some do not. Forums that work as list-servs, for me, work well because I can keep an eye on things that are happening in the forums while I commute, on my smartphone, and respond accordingly. If I have to wait to get home, after all else is done, then I am lost in a sea of posts. This is useless, so I avoid those forums. A good example of forums working well was mobiMOOC 2011.

There is of course another element here, and that is learner choice. If forum discussions are created with prompts, like traditional online learning, then the forums get barraged by the 2, 3, 4, 10, 15 possible answers and you end up having a lot of repetition. There are countless examples of this, but one that comes to mind is the Games in Education MOOC that I did last fall. Interesting stuff, and I did try to participate in the forums, but as soon as one person makes a post about a particular game (let's say Metal Gear Solid), then why are there six other threads about the same game? Those should all be in one thread.

Meaningful discussion could conceivably take place in a MOOC discussion forum, but I don't think that the variables have yet been determined as to how to best set up a forum from a technological and a pedagogical end. The other thing that comes to mind is this notion (from the PLENK2010 research, if I remember correctly - Kop et al.?) that there were quite a few people who seemed to be "refugees from the forum" who started blogging. Having an alternative vehicle is great, but the thing I started pondering in Rhizo14 was how many media a learner can reasonably keep track of at any given time. For me, in Rhizo14, the blog and the Facebook group were primary; P2PU was secondary (check in every few days), and twitter tertiary - in other words, whenever I could remember. Two primaries and two secondaries are what I could handle (and not that well, I might add). So in MOOCs, where forums don't work well, or where forums are an option among other venues (Fb groups, G+, twitter, blogs, wikis and so on), what toll does that take on the learner?

Does course duration matter to MOOC participants? If so, what is an optimal length? What is too short? What is too long?
I will refer you back to my video answer for this one ;-). In all honesty, it depends on the subject matter at hand, and on who you expect the learners/participants to be. In some cases, like CCK, the course was structured for 13 weeks (if I remember correctly). Perhaps this was a university requirement, since it did run for credit at the University of Manitoba, but it may just as well have been a design consideration outside of university norms. That said, I would say, from the research I've read thus far, such as Weller's analysis of Katy Jordan's data (I think I've seen a recent article on IRRODL by her that I have not read yet), the sweet spot seems to be six weeks for MOOCs. Now, I think this data is based on Coursera xMOOCs, so the design decisions for those MOOCs are probably affecting the appropriate length.

Going back to my earlier comment, I would say that if you think of your message, and your delivery, and your goals, you will have an idea of how long the MOOC needs to be.  I will go ahead and state that a MOOC that is less than 3 weeks is not really a MOOC. I don't know what it is, but a MOOC it ain't (assuming C = course).  It took me a couple of weeks to get acclimated to the people in Rhizo14, even if I knew some of them from before.  Depending on your participation in the MOOC, it may take you a week to get comfortable and in the head-space to be where you need to participate, or lurk/consume.  Thus, three weeks are, for me, the very minimum needed.  The max...well, current research seems to indicate six, or maybe eight, weeks, but this depends on a variety of factors.




#edcmooc Human 2.0? Human+=Human?

Vice Admiral Tolya Kotrolya
Well, here we are! The final(ish) week of #edcmooc.  As I wrote in my tweet earlier this week, I think #edcmooc, like #ds106, is probably more of a way of life for some than an actual course.  Sure, there won't be additional prescribed readings or viewings after this week, so the course is technically over, however the hallmark of any good course, as far as I am concerned, is one where the learners keep thinking about the material long after the course is done.  They keep engaging with the provided materials and with any new material that they encounter in critical ways.  In other words, once the course is done we don't shelve and compartmentalize the knowledge gained and leave it to gather dust like an old book on the shelf.

In any case, like previous weeks, here are some thoughts on the hangout, readings, and viewings from this week. To be honest, I don't particularly care about a certificate of completion, but I am interested in designing an artefact as a challenge to myself. I am predominantly text-based, so doing something non-text-based (or artistic) would be something that would push me a bit. That said, I am not sure if I will be able to do this in one week's time. What do others think about the artefact? Challenge accepted?

From the Week 3 Hangout

Last week (Week 2 Hangout) the group mentioned a quote, which was repeated (and corrected) this week: "We have a moral obligation to disturb students intellectually" - Grant Wiggins.
This made an impression on me, not because of the content of the quote, but rather because of its succinctness. The content is something that I, luckily, have been exposed to in my career as a graduate student. In my first graduate course, an MBA course, Professor Joan Tonn loved to tell us "can you give me a quote?" whenever we claimed something to be true based (supposedly) on what we had read. This was her way not only to get us to back up our claims based on the research we read, but also to get us out of our comfort zone so that we could expand our knowledge and understanding. I remember my fellow classmates and I scurrying those first few weeks in class to find the page number in our books and articles where our supporting points were mentioned, in order to be able to respond to "can you give me a quote?"

Another example comes from the end of my graduate studies, while I was finishing up my last MA degree. It seemed that Professor Donaldo Macedo's favorite phrase was "can I push you a little?" In sociolinguistics he "pushed" us to think a bit more deeply about the readings, deconstruct what the readings were, and deconstruct why we held certain beliefs. This isn't always comfortable, thus the initial question "can I push you a little?" (you could always say no, by the way). In psycholinguistics Professor Corinne Etienne kept us on task; like with Joan Tonn, we also needed to quote something, and make sure that what we were quoting supported our arguments and answered the posed questions. This meant that, unlike certain politicians, we couldn't side-step the original question and answer the question that we hoped we would be asked. These are all examples of disturbing the sleeping giant that is the mind, waking it up to do some work and expand its horizons, not just respond back with some canned, comfortable response.

In my own teaching, I have adopted a devil's advocate persona. I try to disturb the calm waters of the course discussion forums by posing questions that may go a bit against the grain, or questions that might probe deeper into students' motivations for answering a certain way. I prefer the title of devil's advocate because, unlike "can I push you a little?", it doesn't necessarily mean that I hold the beliefs that I am asking probing questions about, but rather it means that I am interested in looking at the other sides of the matter, even if I don't personally agree. I don't participate in many faculty workshops or things like pedagogy chat time, so I often wonder how my peers do this in their courses - intellectually disturb students - in order to get them to expand their thinking. If any teachers, instructors, or professors are reading this, feel free to share in the comments :)

Another interesting point mentioned by Christine Sinclair is that it is difficult to mention the contributions of 22,000 participants in a MOOC. I have to say that it's difficult to mention all of the contributions of a regular 20-student course in one hour's time frame! In the courses that I teach I try to do a 1-hour recap podcast every week or every other week (depending on how much content is available, and how conducive the content is to an audio podcast) and I have a hard time finding the time to mention everything that was important to mention! I can't imagine how many hours it would take to read, prepare, and produce a live hangout to get most of the contributions mentioned. The MOOC Hangout would be like C-SPAN ;-)

Another difficult thing is figuring out who wants to be mentioned and who does not. This is a problem with a regular course of 20 as well. If you have a public podcast about the course, even if it's only "public" to the 20 students, some students don't want to be named because it makes them uncomfortable. For my own course podcasts I go back and forth between mentioning names, or just mentioning ideas and topics brought up and acknowledging the contributions of students that way. The people who wrote about what I mention in the podcast would know that it was their contribution that was mentioned, and they would (hopefully) feel some sort of sense of acknowledgement, and thus get a sense of instructor presence in the class this way.

From the Videos

In the videos section of #edcmooc this week we had Avatar Days, a film that I had seen before. One of the things that I was reminded of was that I really liked the integration of avatars in real-life environments. It is a bit weird to see a World of Warcraft character on the metro, or walking down the street, but it's pretty interesting visually.



I do like playing computer games, but I am not much of an MMORPG person. I like playing video games with established characters, like Desmond (Assassin's Creed), Sam Fisher (Splinter Cell), Snake (Metal Gear), and Master Chief (Halo). I play for action, and to move the story forward.  For me these games are part puzzle,  part history, part interactive novel.  I only play one MMORPG and that is Star Trek Online. The reason I got sucked into it was because I like Star Trek, and this is a way to further explore the lore of that universe. I have 3 characters in Star Trek Online (one for each faction of the game) and while the game gives me a spot to create a background story for them, it seems like too much work. I really don't see my characters, one of whom you can see above  - Vice Admiral Tolya Kotrolya, as extensions of myself.

Watching Avatar Days again had me thinking: Are avatars an escapist mechanism? A way of getting away from the mundane and the everyday? Are they extensions of real life, or of someone you would like to be, or whose qualities you'd like to possess? How can we, in education, tap into those desired traits that people see in their avatars to help them move forward and accomplish those things in real life? For instance, let's say that I didn't play by myself most of the time, I was really active in my Fleet (the equivalent of a Guild in WoW), and I wanted to be the negotiator or ambassador to other fleets; I would guess that I would need some skills in order to be able to qualify for that position. Let's say I continue to hone my skills in that role. Now, being an ambassador in real life is pretty hard (supply/demand and political connections), but can you use those skills elsewhere? This is an interesting topic to ponder. I wonder how others think of their avatars.



True Skin (above) was also quite an interesting short film. The eye enhancements reminded me a little of Geordi La Forge, the blind engineer on Star Trek: The Next Generation. These enhancements made me think a bit of corrective lenses for people with myopia or presbyopia. In a sense, people who need eye glasses or contacts to see could be considered more than human if we really thought about it from an augmentation perspective. The portrayal in this video just seems foreign, and thus potentially uncomfortable, because eye augmentation to see in the dark, or to have information overlaid on what you see, is out of the norm for us at the time being. Another interesting thing to consider is memory aids. We use memory aids a lot in our current lives. Our phones have phone books in them, calendars, to-do lists. If we don't remember the film that some actress was in, we look it up on IMDB. I remember about ten or so years ago I had a friend who vehemently opposed any sort of PDA (remember those? ;-) ) because he prided himself on remembering his credit card numbers, phone numbers, and other important information. Sure, some information is important to remember without needing to look it up; however, when you have memory aids for potentially less important information, such as who was the actor who portrayed character-x in the 1999 remake of movie y, it frees your mind to remember, and work on, other more important things. This way you are offloading (a computer term co-opted to describe a biological process) less important information to an external device to leave the main computer (the brain) to do other things.

The virtual displays on someone's arm reminded me a lot of the biotic enhancements that can be seen in the Mass Effect series of games (speaking of Mass Effect, the background music in Robbie reminded me of Mass Effect). The thing that really struck me in this video was the quote: "no one wants to be entirely organic." This is an interesting sentiment, but what happens to the non-organic components when the organic dies? Supposedly the non-organic components cannot function on their own, so where does the id reside, and is it transferable to a data backup, to be downloaded to another body upon the organic components' inevitable death? The last question about this video is: when will it become a series on TV? ;-)

A quick comment on the Gumdrop video: I loved it! It reminded me of a series on the BBC called Creature Comforts (sample video). In this series they recorded humans and matched them with their animal personas (I guess), so the claymation animal was saying what the human had spoken. Gumdrop could very well be a human voiced by a robot.

Finally, a quick note about the Robbie video. This video was a bit of a rough watch. The first thing that surprised me was that the space station was still in orbit after all those years. I would have assumed that eventual drift would cause it to come into Earth's gravity and crash. While watching the video I was wondering when this was taking place. I kept thinking "how old would I be in 2032?" and I made the calculation. Then "how old would I be in 2045?" and I made the calculation, and then Robbie mentions that he (she? it?) has been waiting for 4000 years. At that point I stopped counting, knowing that I would be long dead when Robbie's batteries died. When the robot mentioned that he lost contact with Earth, the first thing that came to mind was a scene from Planet of the Apes; specifically the end, where the main character says "Oh my God. You finally really did it, you maniacs, you blew it up." I am not sure what that says about me, but I would surely hope that they weren't responding to this robot because things went sideways on the surface of the planet.

From the Readings

Finally there were some interesting things that I wanted to point out from the various articles that we had for this final week. In Transhuman Declaration there was this belief or stance (italics my own):
Policy making ought to be guided by responsible and inclusive moral vision, taking seriously both opportunities and risks, respecting autonomy and individual rights, and showing solidarity with and concern for the interests and dignity of all people around the globe. We must also consider our moral responsibilities towards generations that will exist in the future.
The thing that stood out to me was the invocation of morality. I haven't really thought about the nature of morality in quite some time - or rather I haven't had to debate it. That said, I am curious as to whether morality and moral behavior are a standard, or an expected standard, amongst human beings, or whether they fall under the category of "common sense," which, as we know, isn't all that common, but rather is made up of the cultural and lived experiences of the person who holds these things as common sense. Is morality something that is malleable? Or is it a constant? If it's malleable, what does that say about the expectation to act morally? If you harm or injure someone or something while trying to act morally, does that negate or minimize the fact that you have actually harmed them or stepped all over their rights?

The final article, for me anyway, was Humanism & Post-humanism. Here is something that got the mental gears working:
In addition to human exceptionalism, humanism entails other values and assumptions. First, humanism privileges authenticity above all else. There is a "true inner you" that ought to be allowed free expression. Second, humanism privileges ideals of wholeness and unity: the "self" is undivided, consistent with itself, an organic whole that ought not be fractured. Related to the first two is the third value, that of immediacy: mediation in any form--in representation, in communication, etc.--makes authenticity and wholeness more difficult to maintain. Copies are bad, originals are good. This leads some humanisms to commit what philosophers call the "naturalistic fallacy"--i.e., to mistakenly assume that what is "natural" (whole, authentic, unmediated, original) is preferable to what is "artificial" (partial, mediated, derivative, etc.).
What really got me about this is that in humanism there seems to be no space for the fact that, while we do process things differently, we aren't really 100% unique as individuals. The old adage of standing on the shoulders of giants goes beyond academic writing. We are the sum (or more than the sum sometimes) of our experiences, which encompass human relations, education, personal experiences, environmental factors, and many many more things. We can be clever, ingenious, and visionary, but we weren't born with all of what we need; we acquired it along the way and it shaped us into who we are. We can be authentic, but we can't be authentic without other people around. Others both shape us and allow us to show our individuality and elements of authenticity. Thus, while we may not copy verbatim, we do copy in some way, shape, or form, while we remix that into something that makes it "new" and not a copy of something.

Furthermore, this whole notion of wholeness is a bit of where I saw Carr's Is Google Making Us Stupid? article coming in. It seems that one of the laments (I won't go into everything in this article) is that people seem to skim these days, that they don't engage deeply, because the medium of the web has trained us (or derailed us, as the reading might imply) with way too many flashy things on the screen that vie for our attention. I completely disagree. Even when people had just plain-old, non-hypertext books, things kept vying for our attention. If we are not interested in what we are reading, it is more than easy to pick up that comic book, listen and sing along to that song on the radio (or the MP3 player), or call your friends and see if they want to hang out. Even when you're engrossed in the reading, in traditional, non-hypertext materials, if there are footnotes or endnotes that give you a lot of supplemental information, they take you out of the flow of your reading. Deep reading isn't an issue that is technology related, but rather a more complicated (in my opinion) endeavor which has to do with reader motivation, text relevance to the reader, text formatting and type-setting (ease of reading), and the setup of the mechanics and grammar of the text, i.e. the more clunky or "rocky" the text, the more inclined the reader will be to skim or just avoid the text. There are more critiques that I have of the Atlantic article by Carr, but I'll limit it to this one. Now, back to Humanism & Post-humanism. Another interesting quote (italics my own) is as follows:
most of the common critiques of technology are basically humanist ones. For example, concerns about alienation are humanist ones; posthumanism doesn't find alienation problematic, so critical posthumanisms aren't worried by, for example, the shift from IRL (in-real-life) communication to computer-assisted communication...at least they're not bothered by its potential alienation. Critical posthumanisms don't uniformly or immediatly endorse all technology as such; rather, it might critique technology on different bases--e.g., a critical posthumanist might argue that technological advancement is problematic if it is possible only through exploitative labor practices, environmental damage, etc.
This is a pretty interesting thought, that most common critiques of technology are humanist ones. It reminds me a lot of my Change Management course when I was an MBA student and the children's book Who Moved My Cheese? Well, I saw it as a children's book, but it may not be. It's probably a tale that can be dissected and critiqued quite a lot from a variety of stances. The thing that stood out for me is the worry that technology has the potential to alienate by not having people communicate with one another in established ways. But what about people who are already not communicating well in established ways, and who can use ICT to help assist with communication? The usual example of this is students in classes who are generally more timid or laid back. In a face to face classroom, which has space and time limits imposed by its very nature, the students who are more outgoing and outspoken might monopolize the course time. This won't give learners who are not as outspoken an opportunity to chime in, or share their point of view or understanding once they have processed the readings for the course - things that could move the conversation and learning forward in interesting and unforeseen ways.

In an online course or a blended course, however, learners have more affordances than in a strictly face to face course. They have time to chime in, and thus the conversation can go on longer and more things can be teased out of a discussion topic. Furthermore, students who aren't as outgoing in the face to face classroom have an opportunity to take the microphone (so to speak) and share with others what their thoughts are on the subject matter that is being discussed. Instead of vying for that limited air time that you have in a face to face classroom, ICT has the potential to democratize the discussion that happens in the classroom by providing opportunities for all to contribute. Technology by itself won't be the panacea that makes this happen, let's not kid ourselves; there are many underlying foundations that need to be in place for students to use the affordances of ICT effectively. That said, this is a case where ICT has the potential to bring together, not alienate, fellow interlocutors and travelers on the path of learning.

So, what are your thoughts? :) How does Human 2.0 sound?


#edcmooc - Where do you want to go today? Build that bridge to your utopia

So, we are at the end of Week 2 of #edcmooc and we are wrapping up the unit on Utopias and Dystopias, and everything in between (because nothing is really that black and white). As with the week before, there were some videos to watch and think about. I think that the no-lecture-videos format works well. I like to see what people do with certain conversation starters and where they go with them. As I said last week, even though this course is run through Coursera it's very much a cMOOC format to me.

One of the videos presented was the video below on bridging the future. Honestly, this video seemed really cool, and a nice proof of concept of what could potentially be done with technology. Students, in this case, seem to be using junior versions of tools, like CAD, that professionals use to do their work. This seems useful for learning concepts, but also useful for beginning to learn the tools that are used in real life for these types of tasks. The one concerning thing that I saw was the lack of books. Don't get me wrong, I do my fair share of reading on eBooks, but those tend to be non-academic. If I need to have several books open at the same time, an eBook just doesn't cut it. I don't have the money for five iPads to do what people did in Star Trek with PADDs. I am also wondering what the cost of these things is. I know that the overall cost tends to go down over time, but I am also considering the cost of not equipping classrooms everywhere with this, thus expanding the gap between the haves and have-nots. While this future is cool, it's no utopia, and it's no dystopia. As I said before, one man's utopia is another's dystopia. What's important is: what can we do with this setup that our current setup does not allow?

 


As a side note to the video discussion, the video "A digital tomorrow" (see below) was pretty funny. It may seem dystopic at first, but I think that it's probably indicative of what the future may look like. There will be some pretty interesting technology, but it won't work as well as the advertisements say it does, or as people imagine the future to be: flawless, where everything works. The visuals also reminded me of the jPod TV series.

 

On the article front, the articles were pretty interesting reads, but I'll only focus on two: Metaphors of the Internet and the piece on Peer Reviews vs. Automated Scoring.

The metaphors article brought me back to my days as a linguistics student (a few years back) with its mention of emic and etic perspectives.  It also reminded me of schema activation from that same applied linguistics work.  It was pretty funny to me how Rheingold is painted as an internet critic and a critic of "other forms of electronic communication [who] often cite[s] commodification as a problematic, destructive force on the Internet," especially since the article was written in 2009, and by then Rheingold seemed to have become an internet "convert," advocating the harnessing of the internet and its social element to amplify our collective intelligence.  Is this just an honest oversight by the author? Or a case of selective bibliography or interpretation?

Metaphors are pretty good at getting people started with understanding a new thing.  They activate schemas in our existing knowledge that help connect what we are learning to what we already know. They are, however, only a beginning. Our understanding of the new should go beyond connecting it with the old; we should understand the nuanced differences between the old and the new.  This article reminded me that when the internet was young, and I was starting to learn about it, I didn't have any metaphors for it.  Computing was also new to me, my English was still improving (Greek is theoretically my native language), and existing metaphors like "highway" really meant nothing, because my notion of a "highway" (Εθνική Οδός) was essentially no different from a long stretch of 2.5 lanes.  I guess my notion of the internet was a place to find things. Maybe the best metaphor I could come up with is the notion of the bazaar.

In the other article, one of the things that really came to mind was that way too much emphasis was placed on the grading aspect of essays (the raw score) and not enough on the commentary aspect. When someone grades a paper, or any assignment other than formulas, there are two aspects: the raw score from a rubric and the comments on the essay. Even if someone gets a perfect score on their essay, that doesn't mean they've reached the apex of their performance.  They can still do better and improve, and this is where instructor comments come in.  You can get 100% on an assignment and still improve your work; you do this by reading the comments from your instructor (or a more knowledgeable other) and applying them to your day-to-day work.  Mechanized grading, or peer grading, can give you the same raw scores for some very basic essays, but the commentary for improvement won't be there, not to mention that when essays stray from a prescribed format they will be marked as wrong even when they are not.

Finally, there was also a lot of great activity in the forums. I went in and up-voted a few things that stood out to me, but it would take more than one, two, or three blog posts to discuss all the interesting sparks of the imagination in the forums.  For the time being I picked one thread that ties into my other MOOC thoughts.  This thread was: "Would you pay for a MOOC?" The question was:
Would you pay for a degree taught in MOOCs? More importantly, and a topic in and of itself, would businesses and industries hire people who have learned in this type of environment?
Jen Ross asked in this discussion:
Great post, Alan - maybe the question isn't so much 'would you' pay, but 'how much' is a MOOC worth? What is it that we pay for when we pay for education? 
I honestly think that, the way things are today, what we pay for is Accreditation.  Of course that presupposes a valid pedagogical model and faculty contact time, however one may measure that. In the US, this seems to mean measuring "butts in seats" time in many instances. So having a subject expert teach for a certain amount of time, and then passing some sort of summative examination, ties together to give us accreditation. This may seem like a really bleak view of education, but with many people going to school for employment purposes, that seems to me to be the main impetus for paying for educational services.

I personally wouldn't pay anything for a MOOC. A MOOC is open, and thus, for me, free. The certificate of completion that coursera, udacity, and EdX hand out at the end does not mean much in the real world at the moment; however, it is a nice memento of my time in the MOOC! Like others said in the discussion thread, I would probably donate the cost of a cup (or two) of starbucks coffee toward the MOOC if it helped support the infrastructure that makes the collaboration possible. But paying as a pre-requisite to participate - no.  Hamish Macleod pointed out that he contributes to wikipedia every now and again because it is a valuable tool for his job. I think that this is an apt analogy for MOOCs.  Furthermore, I do think of the "open" in MOOC as free.  The content usually isn't open as in OER open, so open must mean free.  Otherwise, what could open mean?  Open to enrollment?  So are college courses, and have been for quite some time. So what? :)

I also liked this quote from Roberta Dunham:
MOOCs are great ways to share learning without having to deal with the organized higher education syndicate.
I think Roberta hits on an important point, and one of the intents of the original cMOOCs. I think we've come full circle, and if we haven't yet, we may be pretty close.  Keep thinking freely :)

Last thought (more of a "don't let me forget" type of thing): the issue of accessibility came up this week in #edcmooc, accessibility of two types.  On the forums, accessibility was discussed from a health standpoint, with people with disabilities and their access to MOOCs; on twitter, the issue was the digital divide (and I would add computer, information, and critical literacies) and access to MOOCs.  This is a major topic in my mind - but subject of a future post :)

 Comments

All MOOCs are online courses, but not all online courses are MOOCs...

 Permalink
Seems to me that even though I dropped the Logic course on coursera (loved those two professors, by the way!), Logic is back to haunt me ;-)

I came across a blog post the other day through my RSS reader, which stated the following:

As massive open online courses (MOOCS) have exploded in popularity educators are coming under increasing pressure to make an effective use of the new technology. To help instructors realize the potential of the new content delivery platforms Georgia Tech is unveiling a MOOC about creating a MOOC.
To be honest, the first thing that came to mind was the following question: what does Georgia Tech know about MOOCs and MOOC pedagogy? I couldn't recall anyone off the top of my head from Georgia Tech who has been heavily involved with MOOCs over the past couple of years.  In any case, I followed the link from this quick news blurb, and it leads to "Fundamentals of Online Education".  The course isn't about MOOCs but, it seems, about "traditional" online course design and pedagogy (similar to the one I am teaching, as a matter of fact ;-)  ).

Last year, I noticed a number of blog posts and opinion pieces essentially equating MOOCs with "online education."  I discounted those posts because they were few and far between, and I didn't think there was an endemic perception that MOOCs = Online Course.  But now it seems the problem is more widespread.

I am not saying that MOOCs aren't online courses (the OC in MOOC does stand for Online Course).  I just believe it is a fallacy to equate ALL online courses with MOOCs.  We don't do this with on-campus courses, do we?  After all, a 10-person seminar isn't taught the same way as a 500-person auditorium lecture course. I think that by treating the MOOC as THE online course we are simultaneously doing harm to both MOOCs and online courses.  By equating the two, we transpose one's failures onto the other, and one's pedagogical assumptions onto the other.  You can't design a MOOC the way you design a "regular," "traditional" online course, and you can't take the success measures from a MOOC and apply them back to the traditional online course (and, of course, the reverse applies in both cases).  So, next time you hear someone confusing a MOOC with an online course, do them a favor and correct them ;-)

 Comments (1)