Club-Admiralty

v6.2.1 - moving along, a point increase at a time

It's the end of the MOOC as we know it, and I feel...

...ambivalent?  I am not sure if ambivalence is the word I am going for because I am getting hints of nostalgia too.  Perhaps though I should take a step back, and start from the beginning.

This past weekend two things happened:

The first thing is that I've completed reading full books as part of my literature review for my dissertation, and I have moved on to academic articles - articles I've been collecting on MOOCs and collaboration in general. While MOOCs aren't really the main focus of my dissertation study, they do form the basis, or rather the campgrounds on which the collaborative activities occurred, and it's those collaborative activities I want to examine. This review of MOOC articles (while still in its relatively early stages) made me reflect back on my own MOOC experiences since 2011.

The second thing is that I received a message from FutureLearn which was a little jarring and made me ponder.  Here is a screenshot:



My usual process, when it comes to MOOCs these days, is to go through the course listings of the usual suspects (coursera, edx, futurelearn) and sign up for courses that seem interesting.  Then, as time permits, I go through these courses.  I usually carve out an hour every other Friday to do some MOOCing these days since most of my "free" time is spent on dissertation-related pursuits.  It's no exaggeration to say that I have quite a few courses that are not completed yet (even though I registered for them about six months ago).  What can I say? I find a ton of things interesting.

If you're new to MOOCs you might say "well, it was a free course, and now it's going back into paid land - you should have done it while it was available". Perhaps you're right, perhaps not.  For a MOOC old-timer, like me (ha!), this type of message is really disheartening, and it speaks quite well to the co-opting and transmogrification of the MOOC term (and concept), turning it into something that is not really recognizable when compared to the original MOOCs of 2008-2012. Or perhaps it's even a bit like an erasure - erasing it from the past - but luckily at least articles exist to prove that it existed, and cMOOC is still recognized as a concept.

I am convinced that platforms like coursera and futurelearn can no longer be considered MOOC platforms, and should be referred to as either learning management systems (which they are) or online learning platforms. Over the past few years, things that seemed like a given for an open learning platform have started to disappear.  First, the 5Rs stopped applying.  You couldn't always revise or remix materials that you found on these platforms...but you could download copies of the materials so that you could retain your own copy, and this meant that you could potentially reuse and redistribute.  Redistribution was the next freedom to go, and after that was reuse.  You could still download materials, though (at least on coursera and edx).  Then a coursera redesign made video download not an option... (still an option in edx, not sure if it was an option in futurelearn), and now courses are becoming time-gated... argh.

The certificate of completion was an interesting concept - a nice gift from the people who offered the course if you jumped through their hoops to do the course as they intended, but it was really only valuable when it was free of cost. This freebie has also been lost (not a great loss since it doesn't really mean much - at least not yet).

All of this closing off of designs and materials (closing in a variety of ways) makes me long for days gone by - days not long ago, with the original MOOCs only about 10 years in the past.  Although, I suppose in EdTech terms 10 years might as well be centuries.

I do wonder when might be a good time to reclaim the name and offer up connectivist courses again - or perhaps it's time to kill the term (wonder what Dave thinks of this ;-) ) and create something that doesn't have such commercial interests infused into it.

Thoughts?


Instructional Designers, and Research

Yet another post that started as a comment on something that Paul Prinsloo posted on facebook (I guess I should be blaming facebook and Paul for getting me to think of things other than my dissertation :p hahaha).

Anyway, Paul posted an IHE story about a research study which indicates that instructional designers (IDers) think they would benefit from conducting research in their field (teaching and learning), but don't necessarily have the tools to do this.  This got me thinking, and it made me ponder a bit about the demographics of IDers in this research. These IDers were in higher education.  I do wonder if IDers in corporate settings don't value research as much.

When I was a student studying for my MEd in instructional design (about 10 years ago), I was interested in the research aspects and the whys of the theories I was learning. I guess this is why further education in the field of teaching and learning was appealing to me, and why I am ultimately pursuing a doctorate. I digress though - my attitude (inquisitiveness?) stood in contrast with fellow classmates who were ambivalent about, or even annoyed by, how much time we spent on 'theory'.  They felt that they should be graduating with more 'practical skills' in the whizbang tools of the day.  We had experience using some of these tools - like Captivate, Articulate, Presenter, various LMSs, and so on - but obviously not the 10,000 hours required to master them†. Even though I loved some classmates (and for those of you who are reading this, it's not a criticism of you! :-) ), I couldn't help but roll my eyes at them when such sentiments came up during out-of-class meetups where we were imbibing our favorite (hot or cold) beverages.  Even back then I tried to make them see the light.  Tools are fine, but you don't go to graduate school to learn tools - you go to learn methods that can be applied broadly, and to be apprenticed into a critical practice.  As someone who came from IT before adding to my knowledge with ID, I knew that tools come and go, and that having a degree focus mostly on tools is a waste of money (and doesn't do students any good....hmmmm...educational fast food!). I know that my classmates weren't alone in their thinking, having responded to a similar story posted on LinkedIn this past summer.

My program had NO research courses (what I learned about research I learned on my own, and through the mentorship of professors in my other master's programs). Things are changing in my former program, but there are programs out there, such as Athabasca University's MEd, which do work better for those who want a research option.

Anyway, I occasionally teach Introduction to Instructional Design for graduate students and I see both theory-averse students (like some former classmates) and people who are keen to know more and go deeper. I think as a profession we (those of us who teach, or run programs in, ID) need to do a better job of helping our students become professionals who continually expand their own (and their peers') knowledge through conscious attempts at learning, and research skills are part of that.  There should be opportunities to learn tools, for the more immediate need of getting a job in the field, but the long-term goal should be setting up lifelong learners and researchers in the field.  Even if you are a little-r researcher, you should have the tools and skills to improve your practice.

As an aside, I think that professional preparation programs are just one side of the equation.  The other side is employment and employers, and the expectations those organizations have of instructional design.  This is equally important in helping IDers help the organization. My conception of working with faculty members as an IDer was that we'd have a partnership and we'd jointly work out what was best based on what we had (technology, expertise, faculty time) so that we could come up with course designs that would be good for their students. The reality is that an IDer's job, when I did this on a daily basis, was much more tool-focused (argh!).  Faculty would come to us with specific ideas of what they wanted to do, and they were looking for tool recommendations and implementation help - but we never really had those fundamental discussions about whether the approach was worth pursuing in the first place. We were the technology implementers and troubleshooters - and on occasion we'd be able to "reach" someone and we'd develop those relationships that allowed us to engage in those deeper discussions. When the organization sees the IDer role as yet another IT role, it's hard to make a bigger impact.

On the corporate side, a few of my past students who work(ed) in corporate environments have told me that theory is fine, but in academia "we just don't know what it's like in corporate," and they would have liked less theory and more hands-on practice for dealing with corporate circumstances. It's clear to me that even in corporate settings, the organizational beliefs about what your job as an IDer is impact what you are allowed to do (and hence how much YOU impact your company). Over drinks, one of my friends (who works in corporate ID, but was formerly in higher education) recently quipped that the difference between a credentialed (MEd) IDer and one who is not credentialed (someone who just fell into the role) is that the credentialed IDer sees what's happening (shovelware) and is saddened by it. The non-credentialed person thinks it's the best thing since sliced bread‡. Perhaps this is an over-generalization, but it was definitely food for thought.

At the end of the day I'd like to see IDers more engaged in education research. I see it as part of being a professional who wants to grow and be better at what they do, but the educational programs that prepare IDers need to help enable this, and the organizations that employ them need to see them as an asset, similar to librarians, where research is expected to be part of the job.

Your thoughts?


MARGINALIA:
† This is obviously a reference to Gladwell's work, and the 10,000 hours of deliberate practice.  It's one of those myths (or perhaps something that needs a more nuanced understanding). It's not a magic bullet, but I used it here for effect.
‡ Grossly paraphrasing, of course

Ponderings on predatory journals


I originally posted this as a response to a post that Paul Prinsloo wrote on facebook (in response to this Chronicle article on Beall's list and why it died), but it seemed lengthy enough to cross-post as a blog post :-)
--------------

So many issues to dissect and analyze in such a (relatively) brief article. It is important to see and analyze predatory journals (and academic publishing in general) systematically, alongside other trends in academia. This includes the fetishization of publish-or-perish, and the increased research requirements to even get a job in academia (see the recent article on Daily Nous as an example).

One thing that bugged me was this line -- "Good journals are not going to come to you and beg you for your articles. That should be your first clue." There are legitimate journals out there that are new, and hence don't have an established readership yet, so they can't necessarily rely on word of mouth to get submissions for review. I am helping a colleague get submissions for upcoming issues (shameless plug: http://scholarworks.umb.edu/ciee/ ), and we certainly solicit submissions from within our social network (and the extended social network). We don't spam people (perhaps that's the difference), but the social network is used for such purposes.

I also don't like the idea of categorizing journals as 'high quality' and 'low quality'. Anecdotally, I'd say that what passes as high quality tends to (at least) correlate with how long a journal has been in the market, the readership it has amassed over the years, and the exclusivity it has developed because of this (many submissions, few spots for publication). Exclusivity doesn't necessarily mean high quality, and a high quality journal doesn't necessarily mean that a specific article is high quality (but we tend to view it under that halo effect).

At the end of the day, it seems to me that academics are just as susceptible to corporate interests as other professionals. True freedom to say what you need to say sometimes requires a pseudonym - sort of like the Annoyed Librarian...

Validity...or Trustworthiness?

It's been a crazy few days!  If it weren't for my brother coming down to hang out for a while I probably would have more in common with Nosferatu than a regular human being😹 (having been stuck indoors for most of the weekend).  When I started off this summer I gave myself a deadline to be done with my methods chapter by August 30th (chapter 3 of my proposal).  After reading...and reading...and reading...and re-reading (select articles from EDDE 802), I reached a point of saturation when it comes to methods.  I really wanted to read all of Lincoln & Guba's 1985 book, Naturalistic Inquiry, during this round, but it seems like I will just need to focus on specific aspects of the book.

So, in this whirlwind of activity, I went through the preamble to my methods section, my target participant descriptions, my data collection, my data analysis techniques, and any limitations.  I added to these sections, explicated, went more in-depth in each one, and corrected issues that were brought up by Debra in EDDE 805, some outstanding issues and comments from the feedback on the parts I had worked on in MDDE 702, and some of the initial comments I got back from my dissertation chair. **phew** That was hard work!  The only parts that I still have left to complete in order to be "done" with my methods section are (1) the ethics section; (2) the validity/reliability/bias section; (3) a conclusion section for the chapter bringing it all together; (4) an appendix with a sample survey; and (5) an appendix with the participant consent form.  I am considering adding (6) the REB application as an appendix as well before I call this section "done".  I am not sure if I will be done by August 30th (as was the original plan) but I think I will be damn close.

That said, there is one thing that is tripping me up, and that has to do with the validity/reliability/bias section.  Bias is actually not that hard.  I think I can write up procedures and things to be on the lookout for in order to avoid bias in both data collection and analysis.  The thing that is much more concerning is philosophical:  Do I go with Trustworthiness, Credibility, Dependability, and Transferability as what I talk about in this section (coming from the Lincoln and Guba tradition of qualitative research work)?  Or do I choose the more traditional Validity and Reliability and discuss my methods in that frame of reference?  Of course, for qualitative studies the Lincoln & Guba approach makes sense (at least it does to me, and it's referenced in a variety of other texts I've read on qualitative approaches to research), but at the same time quite a few of the texts that I've read (both on case studies and on qualitative studies in general) still use reliability and validity as terms in qualitative research. So, do I "translate" validity and reliability (from the texts) into Lincoln & Guba terms?  Just discuss in the framework of Lincoln & Guba? Or try to smash both together?  Perhaps start with Validity & Reliability and transition to L&G terms since they make sense?  I need to re-read Chapter 11 of Naturalistic Inquiry this week to help make up my mind (any thoughts are more than welcome in the comments).

At this point I am at 23 pages (with 1 paragraph of lorem ipsum text, and quite a few scraps of thought patterns for items 1 and 2 above), so I am thinking I should be wrapping this up soon and not getting logorrhea.

The publication emergency


Paul Prinsloo has a wealth of thought-provoking posts on his facebook ;-)  I wasn't planning on blogging until tomorrow, but this got my mental gears moving and thinking (not about my dissertation, but it's thinking nevertheless).  This blog post started as a continuation of a comment I left on Paul's facebook feed. The article that got me thinking is a piece on the Daily Nous titled The Publication Emergency.

In the article a journal editor (in the field of philosophy) opines (although not with his editor hat on) that graduate students (I guess this means doctoral students) should be barred from publishing until they are done with their degree. He says that this is not a barring of people who don't hold a doctorate, but rather of people who are in the process of earning their doctorate.  So, in theory, someone with an MA, but not pursuing a doctorate, would be welcome to publish their stuff.  So, even if an article is good and has merit, if its author is in the process of earning a doctorate they'd get an automatic rejection.

It was an interesting read, and a good thought piece - and in all honesty I think that since the author is a philosopher he is either trolling us or trying to get us to think more deeply about this topic. I certainly hope so, at least, because I don't think there is a way for me to disagree more with the position expressed (my disagreement has reached 11 😉 ).

The proposal seems to be trying to address the problem of the proliferation of credentials amongst the professoriate, and I'd argue that publications are one kind of credential.  This proliferation of credentials is making it much more competitive for those in the professoriate at a variety of levels.  To deal with this problem the proposal is to arbitrarily eliminate one segment of the population, barring them from publications. It's arbitrary because someone without a doctorate, as mentioned above, is still NOT barred from publication, except if they are in the middle of earning one. So someone like me publishing prior to entering a doctoral program is fine, but if I enter that gray zone it suddenly is not? (not trying to make this about me, I promise 😛 ).

I see the problem as being more systemic, and the credentialing is a symptom of a larger issue (something we see at levels beneath the doctoral level as well!).  Doctoral programs have multiplied and saturated the market.  They have gone from artisanal experiences - where few people applied, fewer were selected, cohorts were smaller, and members shared the 'pain' of the experience, sort of like going through an elite military program, but in academia - to (mostly) being cash cows for universities. More PhD† students graduating means more competition in the tenure-stream job market; that is of course unless your program was thinking ahead and designed doctoral programs whose goals are not just to create new tenure-stream faculty, but also to prepare graduates for other facets of society. More programs means more candidates competing for fewer jobs and resources (journals being one of those resources). Hence, you need additional credentials to make it, such as doing post-docs (which I'd call exploitative labor), other types of free work and volunteering, more papers published, etc., in order to get an edge.  It reminds me a lot of high school, when people joined as many extracurriculars as possible in order to pad their college applications 😒

So, now you have a problem, my friend! You see, more early academics‡ are publishing, while current professors claim that they don't have the ability to publish as much because the journal market is saturated - with the author of the article citing 500-600 submissions per year to a single journal. Add to this the budding market of predatory journals, and you've got a really bad mix.  An invalid argument that was proposed in the article is "well, if the argument of a graduate student is good now, imagine how good it will be in the future!"  I don't buy this.  Academia has set up a perverse set of incentives, including keeping track of citations.  If the goal is to write something good and get citations for it, why wait? Ideas don't have a monogenesis, so if you wait someone else will beat you to the punch. Why handicap younger scholars who should actually be apprenticed into this? Another proposal is to have anything published before you got your current tenure-stream job not count, so that people can see what you can do on the job.  This also made me roll my eyes a bit. 🙄 Why practice selective amnesia? Simply to sour the milk and prevent people from doing something?

I think academia needs to look seriously and systematically at the perverse incentives it has provided over the past 40 years♠ and deal with those, rather than just dealing with symptoms in ways that are arbitrary and, at best, a band-aid on a bleeding patient.  Doctoral students should be encouraged to publish; it's good practice for those who want to go into the professoriate.  Why not have the benefit of mentorship while undertaking this?  Furthermore, the publish-or-perish modus operandi for academic units needs to finally be put to rest.  Faculty positions need much more granularity than what the current system allows for.  Faculty hired as lab scientists should be evaluated more on their research and publication performance. Faculty who do mostly teaching should be evaluated more on that.  Tenure and academic freedom should not just be the playground of research PhDs, and we shouldn't try to shoehorn everything into a research PhD framework. This to me seems like a vestige of positivism, and our profession needs to think more seriously about a more dynamic, more representative professoriate.  More holistic means of evaluation will most likely take the pressure off the publication system.



NOTES
† PhDs is just my short term for any sort of doctorate, I don't really want to get into the whole PhD/EdD thing...
‡ By early academic I don't just mean tenure-track folks, but doctoral students
♠ just an estimate on my part, based on my conversations with others who have been in the field longer than I have

Campus deadzones, and creepy hallways: where did everyone go?

Found image on Google
(not actually a photo of me)
Happy Friday dear readers! (umm...anyone still there?  I swear! I am alive! 😆)

I've been attempting to write a blog post all week (and trying to do the 10 minutes of writing per day), but I've been failing on that account...I guess Fridays are a better day as things wind down from the week.  In any case, there is an article from the Chronicle that's been on my mind this week titled "Our Hallways are too quiet". Our department chair sent this to us (everyone in the department) as a thought piece, perhaps something to ponder and discuss in the fall - probably because our department is also like the department that is described in the article.

I had a variety of cognitive and emotional processes go off, and gears grinding, while I was reading this.  I actually hadn't noticed that the author was from MIT...which only recently "discovered" online learning (like Columbus discovering the New World).  Yes, I am a little snarky, but I also think that your frame of reference is really important.  If you are a bricks-and-mortar institution, what you consider "community" might look different from an institution that is focused on distance education (or at least has a substantial DE component).  But I think I am getting ahead of myself.  Let me just say this:  My job title is "Online Program Manager" - as in the person who runs the online components of a specific MA program.  Having been on campus for close to 20 years now, in a variety of roles, I can see both sides.  I think this particular article is really biased, in ways that its author doesn't even see.

Let's start with this:
Entire departments can seem like dead zones, and whole days can pass with only a glimpse of a faculty member as someone comes to campus to meet a student, attend a meeting, or teach a class. The halls are eerily quiet. Students, having figured this out, are also absent. Only the staff are present.
This excerpt, as well as the rest of the article, is very faculty-centric.  As if the faculty (or this particular faculty member anyway) are the only ones who suffer any consequences from creepy hallways.  In my most recent job (headed into my 6th year soon!), and my first in an academic department, I've experienced the demoralization that comes with the absence of colleagues.  In all of my other jobs on campus I've always had colleagues around (with the exception of vacations and such), whereas in an academic department I didn't (don't) always see people.  In my induction period (when I was getting the lay of the land and doing a SWOT analysis of the program I was managing so I could be more effective), Mondays through Thursdays I'd at least see my fellow program managers and faculty here and there, but on Fridays it almost felt like being in the movie I Am Legend.  Granted, this didn't bother me back then because there were a lot of paper records to go through and make heads or tails of. Being busy meant that I didn't really mind being alone.  Once all the paper was organized and made sense of, and work could be done remotely, the big question that comes to mind is this:  Well, if I can do my work remotely, and I don't have to deal with the x-hour commute, why would I need to go in?  Especially for someone who manages a distance learning program.  If one group of employees (faculty) can work remotely (effectively), why not another group whose job duties are conducive to it?  I do agree with one point made above:  students, having figured out that faculty aren't there, are also not there; but there is a big caveat here:  who are your students? Students in my department are (by and large) working adults, so even if faculty were around it doesn't mean we'd suddenly have students sitting around in semi-circles, drinking their dunkies coffee (local affectionate term for Dunkin' Donuts) and discussing Derrida.  If you think that way, you're living in a fantasy.  Student demographics matter.

Goin' on to the next point. The author writes about faculty avoiding the office for a variety of old-fashioned reasons, such as not being able to get work done, avoiding feuds, and avoiding time-sinks like watercooler talk, but then she turns her attention to the perennial foe: technology!
A big reason for decreased faculty presence in their campus offices is technology. Networked computers that allow one to write anywhere also allow us to have conversations with students and colleagues that used to take place in person. Creating new course materials and ordering books is easily done online. Cloud software has made pretty much all our work processes easily done from home, a vacation cabin, a foreign conference hotel. For many scholars, this has been a very liberating occurrence, giving them wondrous flexibility.
Pardon me, I don't know you, but I call 💀💢🐄💩😡 on this argument.  Yes, technology has facilitated certain efficiencies, like not having to fill out a form in triplicate, or not having to wait overnight for a journal article query that only returns the title and abstract of potentially relevant articles.  Technology has not caused faculty to not want to come to the office.  Other organizational factors play a major role in the day-to-day decisions on whether or not to work remotely.  When research productivity is sought more, people will do what they need to do to be more productive in their research.  If community engagement, service, teaching, or other aspects of the professoriate are valued more, then people will gravitate toward those.  It basically comes down to incentives, and when there is little incentive to be on campus to meet those objectives, you will undertake them at a place that is most convenient for you.  I think a lot has to do with the expectations set forth by the institution, the institutional culture, and by extension the departmental culture.  Sure, you can have a department chair (the head honcho in an academic department) mandate that everyone (yes, including faculty) has to be there 3 days per week and put in at least 10 hours of 'face time' in the department during regular business hours (9-5).  That's really only about 3 hours per day. Does 3 hours per day really build community?  Nope.  Does 3 hours per day guarantee that people will be there on those same days and hours?  Nope.  This is the equivalent of butts in seats, for no good reason.  It's as anachronistic as forcing students to endure a long lecture just because you haven't thought through your pedagogies.  First you determine what your root goal is (and no, more face time isn't a worthy goal), and then you hatch a plan to get there, while at the same time taking into consideration the various local variables, norms, and expectations (heck, maybe those need some rethinking too!)

Every time I hear about technology as the "big bad" I am reminded of the rebooted (and cancelled) Thundercats.  From the fan wiki article (with my own annotations in brackets):
Most citizens [of Thundera] abhorred technology, denying the existance of machinery entirely and leaving thoughts of such things as fairy tales. This belief was a major contributing factor to their destruction as the lizards [their enemy] attacked them with advanced bipedal war machines Warbots while the ThunderCats fought with bows and arrows.
Just an interesting side-trip - take it as you will 😂

Anyway, moving along, finally, I see a conflation of the sense of community with face time, and they are not the same thing.  The author writes:
Some would argue that worrying about departmental community is ridiculous. After all, professors aren’t hired or promoted on the basis of departmental relationships, or civic engagement, and most faculty members desperately need quiet time in which to do research and write. True enough. As my colleague, Sherry Turkle, has argued: Conversation matters. Personal contact matters. It is very hard to build relationships with people we do not see in person, and such relationships are the bedrock of so much else that matters on any campus.
I think community is important.  However, just because someone is not in their office at the same time YOU are in your office doesn't mean that you can't have community.  And just because you're not meeting face to face doesn't mean that you aren't communicating.  And just because you aren't meeting face to face doesn't mean that you aren't having personal contact! I've had lots of meaningful conversations, and personal contact, with my many distance friends, family, and colleagues over the years.  From my doctoral cohort, to vconnecting friends and colleagues (sorry I've been a ghost - the dissertation is sucking my mental energy), to colleagues who are geographically dispersed.  Every time I hear of Sherry Turkle I can't help but roll my eyes. Yes, face to face is nice.  Yes, I like face to face sometimes, but face to face ain't the end-all, be-all of conversations, connections, communities, and work.  Yes, we do need community.  Without it we are just a loosely joined confederation of people maybe striving toward a common goal (maybe not), but with community we become stronger, and we get smarter.  But community can be achieved in different ways (look at vconnecting for example).

To wrap up, I am reminded of a joke, or something that one of my mentors (Pat Fahy) kept saying: "It's the parking, stupid!".  This was the response to the question "why do students pursue distance education?".  Of course, this is just one piece of the puzzle; others being things like mobility issues, health issues, childcare, elder-care, working two (or more) jobs, and so on.  I think in an era where we are offering some really great distance education programs (oh yeah...welcome to the party, MIT), and we've seriously considered what makes a good online program for our disciplines in order to get here, it would behoove us to also look at what makes our jobs effective and how we can effectively build communities of various modalities.  Forcing grown human beings to have face time so that they form community is the equivalent of forcing your kids to stay with "weird uncle mike" or grandma because you feel like your kids need a connection with the rest of your family, but you haven't bothered making them part of your family in the day-to-day, except on holidays.  Both kids and adults resent such forced actions.  We can do better.  Just sayin'

OK, now that I've ranted on 😏 - what do you think? 😃



University Education, the Workplace, and the learning gray areas in-between


Many years ago, maybe around 16 years ago, I was sitting in the office of my computer science major advisor, getting my academic plan for the next semester signed off on.  My computer science program was actually an offshoot of the mathematics department, and until recent years (2003?) they were one and the same.  My advisor, while looking at my transcript, noticed that (on average) I was doing better in language courses than in my computer science courses; which was technically true, but many courses designated as CS courses (and ones that were required for my degree) were really math courses, so you needed to do a deeper dive to see what I was actually doing better in.

I never really forgot what he said next.  He said I should switch majors; and it was odd that he didn't offer any suggestions as to how to improve†...  Being a bit stubborn (and relatively close to graduation) I doubled down and completed my major requirements (ha!).  During this chat I told him that I really wished my degree required more coursework in additional programming languages, because that is what I would be expected to know for work when I graduated. His response was that I could learn that on the job... needless to say, my 20-year-old self was thinking "so why am I majoring in this now, anyway?"

Fast forward to the recent(ish) past, flashback brought to you courtesy of this post on LinkedIn. I had recently completed my last degree (this time in Instructional Design) and I was having coffee with some good friends (and former classmates). We were a year or so out of school. Two of us already had jobs (same institutions as when we were in school) and one was on the hunt. His complaint was that school didn't prepare him for the work environment because he didn't know the software du jour (which at the time was captivate and articulate). I did my best not to roll my eyes because software comes and goes, but theory (for the most part) really underlies what we do as professionals. In class there wasn't a dearth of learning about software, but there were limitations: namely the 30-day trial period of these two eLearning titles.  So we did as much as we could with them in the time we had, and we applied what we learned from the theoretical perspective.  No, we didn't spend a ton of time (relatively speaking) on the software, because that sort of practice in a graduate program should really be up to the learner, and it would cost them.  Captivate cost $1100 for a full license, while articulate costs $999/year to license. That cost is actually more than double what the course cost! Furthermore, it privileges one modality (self-paced eLearning) and two specific eLearning titles. The fact of the matter is that not all instructional designers do self-paced eLearning enabled by these titles. Not all instructional designers are content developers‡. I find the author's following suggestion a bit ludicrous:

To replace the non-value add courses, decision makers can study current open job descriptions, and ignore academic researchers' further suggestions. Programs can then be revolutionized with relevant course topics. These new courses can include relevant production tools (e.g. Storyline, Captivate, Camptasia, GoAnimate, Premier, etc.) and numerous cycles of deliberate practice, where students develop a course on their own, and receive the feedback they need. This will make hiring managers very happy.
While I do see value in learning specific technologies, that's not the point of a graduate degree, and graduate courses should prepare you to be a self-supporting, internally motivated learner.  Courses should give you the staples that you need to further make sense of your world on your own, and to pick up the tools and know-how that you need for specific situations♠.  Focusing a graduate degree on production tools is a sure way to ignore the vast majority of what makes instructional design what it is. Practice is important (i.e. building your learning solutions) but it's not the only thing that's important. I also think that employers need to do a better job when posting instructional designer job descriptions, but that's a whole other blog post.

I do think that if you are new to any field you (as a learner) should take advantage of any sort of internship, where the rubber (theory) meets the road.  In some programs internships are required, and in others they are optional.  I do think that internships are an important component for newbies in the field.  When I was pursuing my MA in applied linguistics, in a program that focused on language acquisition and language teaching, the field experience (aka internship) was a requirement.  People with classroom teaching experience could waive the requirement and take another course instead, but for me it was valuable (as much as I had to be dragged to it kicking and screaming).  In hindsight, it gave me an opportunity to see what happens in different language classrooms, something I wouldn't have experienced otherwise.

So, what are your thoughts? What do you think of the LinkedIn article?


Notes:
† I guess this must have been a problem with advising in the college in general because years later the college of science and maths put together a student success center.  They were probably hemorrhaging students.

‡ I suspect this is another, brewing, blog post.

♠ So, yeah...Years later I see some of the wisdom of my advisor.  I think he was partly right, in that I should be able to pick up what I need once I get the basic building blocks, but I think he was wrong to suggest that I change majors, and I do think that less math and more computer science with applied cases would have been a better curricular package.

MOOC CPD & SpotiMOOCdora

Last week (or was it two weeks ago?) I did my rounds on coursera, edx, miriadaX, and futurelearn and I signed up for a few new MOOCs.  I had also signed up for a course that a colleague was promoting on Canvas (innovative collaborative learning with ICT), but I've fallen behind on that one, not making the time commitment to participate.  The list of missed assignments (ones that I can no longer contribute to) is actually demotivating, even if my initial approach was not to do many assignments (or rather, to play it by ear and decide whether I'd like to do some assignments during the MOOC). Maybe this coming week I'll 'catch up' in some fashion ;-).  The interesting thing is that there is a forum in Greek in that MOOC, and it's motivating to see what my fellow Greeks are doing in the arena of ICT and collaboration. I guess I still have a few more weeks before the MOOC ends...

Anyway, I digress (probably not good practice for the dissertation).  Today's post was spurred by a recent essay on MOOCs on Inside Higher Education, where the author looked back at her prognostications and examined them in the light of information we currently have about MOOCs. It is a little disheartening that the original MOOCs (connectivist MOOCs) are sort of gone (at least I don't really see a ton of connectivist stuff happening these days), and the xMOOC variety seems to be going more and more toward money-making.  Even with the MOOCs I've just signed up for, there really isn't an option for a free certificate anymore.  You can still go through the course - which I aim to do in my own sweet time (an opportunity to explore the classics) - but even a basic certificate is not free any longer. Another thing going into this mix is thinking about continual professional development. In the two departments I am mostly connected with (applied linguistics and instructional design), graduates of these programs often need PD credits in order to maintain a teaching license, or to continue to hone their skills. Usually this is done through free webinars, in-service training, or taking additional graduate courses (depending on your field of course). This got me thinking about two things: MOOCs as CPD (which isn't really a new idea), and the all-you-can-eat MOOC (or SpotiMOOCdora - after services like spotify and Pandora).

My first pondering is this:  given that institutions such as Georgia Tech are offering a $10k MA in the MOOC format, why not consider a smaller leap into CPD (professional development courses)?  I know that doing an entire MA might be a bit of a leap for most institutions - heck, even a certificate might be a bit of a leap (aka 'micro-masters' in the MOOC world) - but CPDs have a different set of expectations and requirements, and they are often not available for graduate credit (some are, but most in my experience are not). I think it would make a ton of sense to develop professional development courses in a MOOC format that are available for free to a target audience (let's say teachers of high school biology).  The payment can come in the form of assessment, or an in-person fee for a facilitator who brings together the course content of the MOOC (that people have done previously) in an active learning paradigm.

The second pondering is this:  Is there a market for an all-you-can-eat, month-to-month subscription to MOOCs? Examples of this model would be Amazon Prime Video, Netflix, Hulu, Pandora, and so on.  If not all-you-can-eat, how about a model that's more like Audible, where you get a book per month and you can spend your unused tokens any way you want (if you are still working on a book, you can bank the token for another month, for example)?  If either of these models works, then what would be an appropriate price?  Netflix and Spotify are at $10/month; Audible is $15/month, for example.  The reason I am pondering this has to do with the costs of certification.  I don't know what the secret sauce in certification is, but edx is asking me for $200 to get a certified certificate of completion (this sounds redundant).  What does $200 get me?  I don't get college credit for it, and (for me) the joy of learning is internal, so $200 is better spent elsewhere. For instance, $200 gets me a lifetime subscription to my favorite MMORPG...when said subscription is on sale (lots of hours of fun and additional content). Comparatively, the edx certificate seems like a poor value proposition.
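To make that back-of-the-envelope comparison concrete, here is a minimal sketch (in Python, purely for illustration). The $200 certificate fee and $10/month price come from the figures above; the learner's pace (how many courses get finished in a year) is a hypothetical assumption on my part, not data from any platform:

# Back-of-the-envelope comparison: a one-off verified-certificate fee vs. a
# hypothetical flat monthly MOOC subscription. The $200 and $10 figures come
# from the post above; the courses-completed pace is an assumption.

CERT_FEE = 200.0       # per-course verified certificate fee (edx figure from the post)
SUBSCRIPTION = 10.0    # hypothetical Netflix/Spotify-style monthly fee

def cost_per_course(monthly_fee: float, months: int, courses_done: int) -> float:
    """Effective cost per completed course under a flat monthly subscription."""
    return float("inf") if courses_done == 0 else monthly_fee * months / courses_done

if __name__ == "__main__":
    print(f"Per-certificate model:  ${CERT_FEE:.2f} per course")
    # A year of subscribing, finishing one course every other month (assumption)
    print(f"Subscription (6/year):  ${cost_per_course(SUBSCRIPTION, 12, 6):.2f} per course")
    # Even a slow pace of 2 courses a year still undercuts the certificate fee
    print(f"Subscription (2/year):  ${cost_per_course(SUBSCRIPTION, 12, 2):.2f} per course")

Even at the slow pace in that sketch, a $10/month subscription works out to well under $200 per completed course, which is part of why the current per-certificate pricing feels like such a poor value proposition to me.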

What do you think about these ideas?  Does a monthly subscription MOOC make sense?  What is the value proposition?  And, can we resuscitate the cMOOC?   Thoughts?

Academic Identities, Terminal Degrees, power of the network...

It's been a while since I last just sat down to think and write about something (like the good old days when I was cMOOCing...).  These past few weeks have been about conferences, and getting back on track with my dissertation proposal (although I think I am the only one who is keeping score on that at this point).

In my attempt to get back to writing, and engaging with friends and colleagues out there in the wild blue yonder that is the internet, I thought I would pick through my accumulated Pocket list until it's almost empty.  One of the ponderings of interest came by means of an article on Inside Higher Ed titled Academic Identities and Terminal Degrees, where the overall question was: Does one need an academic terminal degree to identify professionally with that discipline? And, as Josh goes on to explicate:

Can only someone with a Ph.D. in economics call herself an economist? Do you need a Ph.D. in history to be a historian? How about sociology and sociologist? Biology and biologist? Anthropology and anthropologist?

My views on the topic have changed in the past fifteen years: I basically compare my views as someone who had just finished a BA to my current views...on the road to earning a doctorate (are we there yet? 😂).  Originally I would have said that someone could call themselves something only if they've earned a degree in that field. I think today I would call that a protected professional title, and a degree or some sort of certification would be a way to demonstrate that you've been vetted into that profession somehow by somebody. Now, which titles (economist, linguist, archaeologist, biologist, etc.) are protected, and which are up for grabs...well...that's a subject for debate! At the time, the only means of obtaining that expertise (in my mind) was through formal degree programs.

Since that time, in addition to completing a few master's programs and discovering new fields and new knowledge, I've also discovered the power of the network, the potency of communities of practice, groups such as virtually connecting, and expanding my own learning and practice outside of the classroom.  My current feeling is that it's not really as black and white as my younger self thought.  I do think that obtaining a doctorate in the field is one path to getting there, but it's not the main criterion for developing your identity in that field.  The main criterion that I have (at this point in time anyway) is practice and expansion of your own skill set in that field. I guess a good way to describe this is through some examples that came to mind while I was trying to tease it out for myself:

Example 1: The non-practicing PhD
A few years ago I was a member of a search committee looking to fill the position of program director for an academic program at my university. Among the requirements for this position was a terminal degree (a PhD or EdD, as defined in the job posting).  We got a variety of CVs from interested applicants.  In reviewing CVs I noticed an interesting cluster of applicants: those who had earned a terminal degree (four, five, six, ten years ago), but had no publications (or other academic work) under their name other than their dissertation.  Their dissertation was listed on their CV, but nothing else. I am not saying that publishing in academic journals is the only way to demonstrate academic work. You could, for example, be presenting at conferences, presenting at professional association workshops, or writing for a blog or professional publication (basically translating academese for professionals). These job applicants had none of that, so they were demonstrating a lack of practice and continuous improvement in their field.  They had earned their badge of honor by completing a doctoral program, but there was no follow-through.  For individuals like that I'd have a hard time calling them an economist, a biologist, a demographer, or whatever.  I'd call them Doctor so-and-so, but they - in my mind - are not an embodiment of what it means to be a ___________ (fill in blank).


Example 2: Word ambiguity
When I was close to finishing my degree in Applied Linguistics I came across a podcast and a blog of someone who called himself a linguist. I was really happy to come across this podcast and blog because I could continue to learn about a topic of interest once I graduated (and also while I was in school), and this was exciting because back then there weren't really that many linguistics blogs or podcasts around.  My working definition of a linguist is a person who studies linguistics (where linguistics is the scientific study of language).  This is how I've always understood linguistics.  The person on the other end of this podcast was not a linguist in that sense.  He was a linguist in the dictionary sense of a person skilled in foreign languages.  Personally, I'd call that a polyglot and not a linguist. It wouldn't have bothered me too much that this person called himself a linguist if he hadn't started to preach in his podcast about the best way to learn a language.  I feel that at that moment he crossed the line into the domain of what I consider linguists: those who are either clinical linguists (for lack of a better term), or teachers of language who take an inquisitive and critical approach to their teaching and share what they've learned through their research (published or not). This individual calling himself a linguist was neither a teacher nor a linguist (in the scientific meaning). Hence the more accurate term I would use is polyglot, not linguist.


Example 3: The practicing MA graduate
In many fields completing an MA thesis is the only means of graduating from your Master's program.  Even if you don't complete a thesis to graduate, if you've studied research methods, continue to hone your skills of inquiry, and continue to read up on advances in the field, I feel you have the right to call yourself a ________ (fill in relevant blank) - if, of course, there isn't a regulatory board for your profession (nursing, medical, legal, accounting, and other professions of that type). There are many smart people out there who do a lot of work, and who diligently work on keeping their knowledge and skills updated.  Some of them even research and publish.  Through their continued efforts I think they've demonstrated that they are serious enough about their profession to be included in the group that calls themselves a ___________ (fill in blank).


At the end of the day, for me, an academic identity isn't necessarily tied to a degree earned.  A degree on someone's CV might give you clues as to what their academic identity is, but it's not the only consideration.  I think that practice and application are key considerations when you're deciding whether you're in the group or not.  I think if a word has a double meaning - as with example #2 - the thing to do is stick with the more accepted or widely used meaning, instead of one that isn't.  I think it's the honest thing to do.


Your thoughts?


MOOCs as admissions considerations


It's been a while since I've sat down to blog (with the exception of my brief postings last week).  I guess I've had my nose firmly planted in books (physical and digital) trying to get through the reading components of my dissertation proposal so I can sit down and write. I tend to find (for me anyway) that having a bit more of a complete picture in my head as to what I want to write about cuts down on a ton of edits down the road. Because of this I also haven't really engaged a lot with my learning community (MOOCs and LOOMs alike).

That said, a recent work encounter broke my blogging slumber and has pulled me from my dissertation a bit.  In my day job one of my roles is to answer questions about our department's program (what is applied linguistics, anyway? j/k 😆) and that includes questions about admissions. While we prefer applicants with a background in linguistics or a related background such as languages (French, Italian, Spanish, Arabic, Greek - whatever language and literature background), we do accept others who did their BA in something different.  Personally I think that the language is archaic and comes from a time when the mission and vision of our department were slightly different, but that's neither here nor there.  My point is that when there are people interested in our program who come from a background other than languages (such as business or computer science, for example), the question always becomes: how can I better prepare for this program, and ensure I get admitted? Basically, ensuring that the applicant shows some sort of connection between their interest in our program and what they did, or want to do.

In the past couple of years MOOCs have come up!  Even though I've been steeped in MOOCs for the past six years I didn't really think others were.  Furthermore, it amazes me how much value others place in MOOCs, and in the MOOCs that they have taken. Personally, while I like taking xMOOCs (I just signed up for about 10 of them recently through edx and futurelearn, and I am trying to do one on Canvas on collaborative ICT...) I don't know if I would ever mention my exploits in the MOOC arena to others (except maybe through my blog, or through a group of close MOOC friends).  My rationale for not sharing my learning is this:  While I personally derive value from what I do in MOOCs (it expands my own horizons, even if I am just viewing some videos), I also know that assessments are a little forced in xMOOCs.  Simple MCEs or short-answer peer-graded assignments don't really point toward mastery of something.  In ye olde days of xMOOCs the certificates of participation were free, provided that you completed the MOOC in its original run.  Now xMOOCs require you to pay for a certificate of participation, and I personally don't see any value in that.  Even if you pay for a verified certificate where you have someone proctor you while taking MCEs, what does that really mean?  That you can take a test?

This all got me thinking about the potential use of MOOCs for application purposes.  I personally think that taking (and completing) a MOOC shows interest in the topic, so that's a positive for the applicant, but it doesn't necessarily show any mastery. So, while useful, it definitely has its limitations.  The certificates don't really mean much to me for my current work, and yes - I do hold on to the certs that I received while they were still free (😉) - but I don't see additional value in the ones that people get these days in exchange for cash.

What do you think? Is there a value to students doing MOOCs with the aim of getting into a specific part of higher ed?