Club Admiralty

v7.0 - moving along, a point increase at a time

Multilitteratus Incognitus

Traversing the path of the doctoral degree

And just like that, it's fall! (or Autumn, same deal)



It's hard to believe, but the summer is in the rearview mirror. Next week the fall semester begins, and as I look back over the summer I see some things I learned (or observed) in these coronatimes:

The FoMo is still strong!

I thought I had beaten back FoMo (fear of missing out), but I guess not :-). This summer many conferences switched to online due to the ongoing pandemic, and their registration was free. This made them accessible both in terms of place (online) and cost (free) for me. So I registered. I might have registered for far too many, because there weren't enough hours to participate synchronously and attend everything I wanted to. Luckily most sessions were recorded, so I was able to go back and review recordings of things I missed. Between the Connected Learning Conference, IABL Conference, OLC Ideate, Bb World, HR.com's conference (and a few more that I can't remember at the moment), I got more Professional Development done this summer than any other summer. By the end of this week, I'll also have caught up with all the recordings. The "AHA!!!" moment for me was this: about 10-12 years ago, when I was first starting out (as a starry-eyed designer), all this stuff would have been mindblowing. I think online conferences for me are more about filling holes and making me think differently rather than building new knowledge in my mind. And that's OK. I discovered a lot of resources that I forwarded to friends and colleagues who would find them more useful than I did because they are at a different phase in their PD. A conference, like a garage sale (maybe a bad analogy), can yield nothing at all, it can yield a treasure you never thought existed, or it can yield something for your friends and colleagues. You never know what you will find until you start looking.

Quick startups are possible (darn it!)

This summer I was invited by a friend to co-facilitate a couple of weeks of a bootcamp course on teaching online (Virtual Learning Pedagogy). The learner demographic was educators in Nigeria (the course might have been open to other countries as well). The course was offered through Coderina. I think we only had about two weeks between the time we were all invited and the first week of the course. Last week was the last week of the course. I am not sure how much John slept these 6 weeks, but I think that the course was a success. We talk about agile instructional design in our courses, and I think this was a good example of different teams working on different weeks, checking in with one another, and putting together a course while the course was being taught. Could it have been done better? Yes, everything can improve, but I am proud to have been part of such an agile multinational collaboration. I also got to meet a lot of new colleagues that I didn't know before. I think this was a good case study for agile ID. I can't wait to see what the next iteration of the course will look like :-)

Back into 601!

This summer I taught Intro to Instructional Design and Learning Technologies (it's got another title formally, but that's basically it). I had taken several semesters off from teaching in order to focus on my dissertation proposal (which needed a major rewrite - perhaps more on that after I graduate), and I've been looking forward to getting back into teaching. This summer I used the version of the course that Rebecca designed and uses, opting not to use what I had created a few summers back. Part of the reason for using her course was that she had baked consideration for synchronous sessions into the course. I tend to be more asynchronous in my designs (so that people can have flexibility), but I wanted to be experimental this summer with sync sessions. Another reason I wanted to use someone else's design was to extend my thinking and collaborate with others. I've got my own version of what an intro course can look like, but looking at another designer's design can add to your own toolkit and thinking. Additionally, if there is one version of the course that many people contribute to, I think differing student cohorts benefit both from the stability of the curriculum and from the process of collaborative design. This way, if cohort A takes the course taught by professor A, they won't get radically different core content than cohort B taking the course with professor B. Your learning experience may differ, but the core knowledge required down the road by other courses should be more or less similar. I really enjoyed teaching this summer. My students were awesome, and we had good exchanges both via synchronous and asynchronous means. I also loved that I was able to invite friends and colleagues who work in ID to have some candid chats with our learning community. I think this was much more effective than reading articles about what an ID does. If I could hop into a DeLorean and go back to June, though: this summer I only had 6 students. Such a small number of students can make for a nice seminar-style course, but the course was designed for a class size of 10-15, and the dynamics are definitely different with a smaller cohort. I think that if I could go back in time I'd give students an option: we could have asynchronous forums each week for discussing the ideas and topics of the course, or we could forego (most of) the forums and meet synchronously each week to accomplish similar ends. I think a smaller number of students makes the forum feel a little like an empty playground. It's got a lot of potential, but it's only actualized when many kids go play.

Dissertation ahoy!

Finally, a little bit about this doctoral journey thing.  In May I successfully defended my proposal (yay) which allowed me to apply for IRB/REB clearance (yay!).  At the end of June, I got that clearance (yay!) so I could start reaching out to study participants.  It's hard to believe that a (somewhat) random MOOC I signed up for while waiting to hear back about my application to the EdD program ended up becoming my dissertation topic.  I may have bitten off more than I can chew in terms of story (data) collection but Narrative Inquiry is all about the story through someone's position in that metaphorical parade.  The parade keeps on moving, and so do participants in it, so I am OK with presenting a sliver of that experience (knowing that it's a sliver of it). It's not possible (for a dissertation anyway) to be a completionist when exploring an experience (which I guess pushes back on my FoMo mentioned above).  Hopefully I'll have a good draft of this thing by the end of the semester in December.

So...what was your summer like?


Image credit: "Zen stones" by rikpiks is licensed with CC BY-NC-ND 2.0. To view a copy of this license, visit https://creativecommons.org/licenses/by-nc-nd/2.0/


HyFlex is not what we need (for Fall 2020)

HyFlex (Hybrid-Flexible) is a way of designing courses for (what I call) ultimate flexibility. It takes both ends of the teaching spectrum, fully face-to-face and fully online-asynchronous, and bridges the gap. Back in the day, I learned about this model of course design by taking an OLC workshop with Brian Beatty himself, but you can learn more about the model in his free ebook. I liked the model at the time (and I still do) because it gave learners more options in the ways they wanted to participate in the course. They could come to class, they could participate online synchronously, they could just be asynchronous, or they could do a mix of any of those depending on the week.

Quite a few people on twitter, including @karenraycosta, were pondering whether they dislike HyFlex (in general) or the implementations of HyFlex that we are seeing. Heck, it seems like HyFlex has become the white-label flex model for universities, because some of them are creating their own brands of flex! 🙄 I wonder what marketing geniuses came up with that. Anyway, colleagues and I have been trying to flex our learning for the last few years as a trial, with mixed results. The main issue that comes up a lot is a critical mass of students, with a secondary issue of staffing. In a "pure" modality (fully F2F or fully asynchronous) you need to have a critical mass of learners to be able to engage in constructivist learning. If lectures are your thing and you expect people to sit down, shut up, and listen, then it works just fine. However, for the rest of us who want to build learner connections and interactivity in the classroom, we need a minimum number of students, and we need to have a sense of how many there will be so we can plan activities. An activity for 20 people won't necessarily scale down to 2 people. The same thing is true in asynch: if most people are F2F, writing in the forums might feel like speaking to an empty room.

Things become more complicated if you want to create a sync session online and merge that with a F2F meeting. The instructor becomes not only an instructor but a producer. They need to manage the tech and ensure that everyone on-site has devices through which they can beam the online folks in (Zoom, Adobe Connect, etc.) to work in groups; and for team presentations you've got to work wizardry to ensure that all people are well represented and the tech works. I've seen this type of producing happen in distance education classrooms of old, where people connected two physical classrooms via point-to-point connections: each site had a producer to manage the cameras that connected the students from one classroom to another, and the remote classroom had a tutor. In total there were 4 people to make this happen for a class of 40. HyFlex (the way it's implemented) expects one person to do this: the instructor.

While I think HyFlex is an interesting model to pursue, I think it's something to pursue for large class enrollments (think classes of 80 or more students), or for multi-section team-taught courses (ENGL 101, for example, which might have multiple sections taught by many people). HyFlex isn't good for a "regular"-sized class (regular defined as 12-20), because you need to design and plan for possibilities that might never occur. This makes course creation more costly and course maintenance an issue, both of which fall upon one person: the instructor. Considering that the majority of courses are taught by adjuncts these days - who aren't paid well - this also becomes an issue of academic labor. Think about it (and use my university as an example):

  • One course is compensated as 10 hours of work per week (at around $5000, or $33/hour)
    • Assume 2 hours per week prep time (really bare minimum here, assuming all course design is complete and the instructor doesn't have to worry about that). That leaves 8 hours
    • 3 hours of that is "face time" each week.  That leaves 5 hours
    • 2 hours per week are office hours. That leaves 3 hours.
    • Assume 3 hours per week that you are spending engaging in things like forums, mentoring, reading learner journals, and responding back to them (an equal amount of time spent as on-campus). You are left with no paid hours to devote.
  • So what's left out?
    • What if you need to do more than 2 hours/week of student conferencing? Do you take a pay-cut? or do you say "first come first serve, sorry!" (not very student-friendly!)
    • Who grades and gives feedback for papers and exams?  Are they all automated?  That's not really good pedagogy
    • When does professional development take place to be able to use all the tech required for HyFlex?  Is this paid or not?
  • Parking on my campus costs $15 per day, so $225 per semester if you are only teaching one day per week. If you are unlucky and teach three days per week (MWF) or five days per week (MTuWThF), then your parking costs are $675 and $1125 respectively.
    • This makes your compensation per course:
      • $4775 ($31/hour) - one-day teaching schedule
      • $4325 ($28/hour) - three-day teaching schedule
      • $3875 ($25/hour) - five-day teaching schedule
    • While on-campus faculty incur these costs anyway, when they are off-campus they are off the clock; with HyFlex, however, they still have their online obligations.
  • There is a commuting cost associated with going to/from home. Those hours are not compensated or accounted for.
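As a sanity check, the arithmetic in the bullets above can be sketched out in a few lines of Python. One assumption is mine, not the post's: a 15-week semester, which is what makes 10 hours per week at ~$5000 work out to about $33/hour.

```python
# Per-course adjunct compensation after parking, as in the bullets above.
# Assumption (mine, not stated in the post): a 15-week semester, so a
# course paid at 10 hours/week works out to 150 paid hours in total.
WEEKS = 15
HOURS_PER_WEEK = 10
PAY_PER_COURSE = 5000   # dollars per course
PARKING_PER_DAY = 15    # dollars per on-campus day

def net_pay(teaching_days_per_week):
    """Net pay per course after parking, plus the effective hourly rate."""
    hours = WEEKS * HOURS_PER_WEEK                       # 150 hours
    parking = PARKING_PER_DAY * teaching_days_per_week * WEEKS
    net = PAY_PER_COURSE - parking
    return net, net // hours                             # rate rounded down

for days in (1, 3, 5):
    net, rate = net_pay(days)
    print(f"{days}-day schedule: ${net} per course, about ${rate}/hour")
```

Note that none of this accounts for the uncompensated commuting time, extra conferencing hours, or CPD mentioned above; it only nets out parking.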

I think HyFlex can work, but not for everything.  Furthermore, for fall 2020 it puts the lives of faculty in danger because faculty would have to come in to teach on-campus.  The "flex" option seems to only be available to learners.  When Brian Beatty originally proposed the HyFlex model (from what I remember of my OLC workshop), the flex was a two-way street.  The faculty member could also say "well, this week we're online because of obligations I have" - but the flex proposed by colleges and universities doesn't seem to include this two-way flex.

Anyway - that's what I have to say about HyFlex. How about you?

Hey! This isn't what I signed up for!


In my last blog post I was responding to the academy that isn't - or, perhaps, as some comments indicated, the academy that never actually was. This past week I was at MIT's LINC conference. It was a good opportunity to attend (since it was local), listen in on some interesting panel discussions, and meet some folks from all over the world doing interesting things. It was also a good opportunity to connect with folks (via twitter, mostly, for me) to think about academia (and the role it has) from a systems point of view. I was rather happy to have been there to see Peter Senge speak at the end of LINC 2019, as he is a systems person and someone whose work was foundational in my instructional design learning.

Now, I wasn't really planning a follow-up to my last post. I sort of wrote it in order to contribute my 2 cents to the discussion, as a response to @Harmonygritz (George), and also to point people to it when they ask me if I want to pursue a tenure-track job. However, the topic of faculty not being prepared to do what their schools ask them to do came up on twitter during my #mitlinc2019 posts (via @ksbourgault), and oddly enough, when I returned home and checked the subreddit r/professors, the following post had been made by one of the users:

I got into academia because I love creating and sharing knowledge. As I sit here working through my day, I can't help but wonder how I turned into a website administrator and customer service agent. Next year I've been told I'm going to have administration/management duties I never wanted and won't be very good at. I used to be the kind of person that didn't work a day in their life because I loved what I did. Now...well...getting through the days require medication. God dammit. Dammit. Dammit. Dammit.

So, I thought - what the heck? Why not write about this? After all, some people tend to give you a strange glance when you point out the problem but offer no solutions. So... here is my tentative solution, as imperfect as it may be.

As I mentioned in my previous post, a tenured (or tenure-track) professor job has three main responsibilities: Teaching, Research, and Service. I would say that here, at the "job description" level there is a problem. Faculty are not prepared for all of these things during their studies.  Faculty are only prepared for one thing in their doctoral studies.  That one thing is Research.

Educating credible, ethical, and competent researchers is the distinguishing characteristic of a doctoral program, and that is what makes a doctoral program different from a master's program. Some people may argue with me that this is specific to a "PhD" whereas an "EdD" is more applied in nature - but I respectfully disagree; I wrote this in another blog post years ago, and I am sticking with it. The crux of my argument was this: both PhDs and EdDs need to be able to critically consume literature, critically produce research literature, and critically apply research literature. If you can't do that, then there is a problem.

What you'll notice is that those three verbs (consume, produce, apply) do not include the verb "to teach". This is something that, in my opinion, could be remedied at the doctoral education level. It's also something that could be remedied at the hiring level. K-12 teachers (and other professionals) are expected to complete a certain number of hours of CPD (continuous professional development) every year to maintain their teaching license. Why not tenure-track faculty? My fellow instructional designers bemoan the fact that faculty rarely reach out to them for training, and that no one attends the workshops they spend a lot of time preparing. Well, I can tell you why (and I've told my colleagues this too): this type of CPD is not something that is valued at an institutional level. No one is forcing faculty members to attend CPD sessions and apply what they learn in their teaching. It's not something that faculty get 'brownie points' for on their annual reviews, and when push comes to shove and they need to clear something off their plates, CPD is it. In my proposal, doctoral students would get their "starter pack" in instructional design and teaching while they are doing their doctoral studies, and then they would continue with CPD at the workplace (and have it be required). Simple.

But, hold on, let me get a little more granular here, because I think it's needed. My proposal doesn't just stop at mandatory CPD. I would argue that - depending on the needs of the organisation - the job duties of the "professor" position should be malleable and negotiable every so often. What do I mean by this? Well, I'd say that we should start off with two "starter" JDs (job descriptions), which for lack of better terms I'll call:
  • Researching Professor (RP)
  • Teaching Professor (TP)
The RP would spend 25% of their time teaching and 75% of their time applying for grants, researching, and publishing. The TP would spend 75% of their time teaching and 25% of their time researching and publishing. Both positions would be compensated the same, and would get the same prestige and the same benefits, but there would be a difference in how they were evaluated. An RP would be evaluated mostly on the quality and volume of their published work; they would need to attend teaching CPD (although I think less than a TP), and they would be evaluated on their teaching, but we'd go "light" on them since this would be a part-time responsibility for them. The TP, on the other hand, would be required to have higher amounts of teaching & learning CPD for the year, given their teaching-first responsibilities, and conversely would be evaluated annually with more weight going to the teaching than the research output. This is important because at the moment (from my own little microcosm) I see a lot of emphasis placed on research and publishing in tenure and promotion cases. Knowing what "track" you've applied to, and what track you are in, is extremely important in my proposed model.

Another key element here is the negotiability of the position. How frequently this happens is up to the organizational needs. But let's say that I am hired into a TP tenure-track position, and after a few years of courses I really want to focus a bit on my research for the next year or two. Maybe I want to be 50-50 (teaching/research), maybe I want to be 25/75 (thus being moved into the RP structure). This should be negotiable between the faculty member and the chair - keeping in mind the needs of the department as well as the needs of the individual. Likewise, if I am in an RP-type position but my department suddenly has a ton of new students and needs me to teach more courses, I could negotiate to go into a TP-type position for a year or two with this new cohort of learners, and thus be evaluated mostly on my teaching. The key thing here with evaluations is that we don't privilege teaching over research (or vice versa) when conducting annual evaluations (or even tenure/promotion evaluations).

But... wait! You are asking me "what happened to service???" Well... service is kind of a tricky subject, isn't it? I would treat service as a category that could push a faculty member into the "exceeds expectations" tier of the annual job evaluation, and because of this the service category would potentially warrant merit pay (for a job well done, not just having it on paper). One reason for this is that lots of things could fall under service: organizing a conference, undertaking student advising, sitting on someone's thesis/dissertation committee, doing some marketing for the program or recruiting new students, serving on the library committee or on the technology advisory board, etc. Because there is such variety in terms of service postings, it's hard to say what faculty should or should not be part of. However, CPD and some method of evaluation should be part of these service decisions.

For example, when I was an undergraduate, meetings with my major advisor were short: he looked at my transcript and signed off on courses I wanted with very little dialog, and even when I tried to engage him about my interests in computer science and my future goals, I'd basically hear crickets. And when my GPA dipped, he advised me to change majors instead of figuring out why this was the case, where my areas of deficit were, and how to improve (I guess he was more worried about departmental averages than about retaining students in STEM fields...). In the meantime (the last 15 years) I've met faculty "advisors" from all across campus who "advise" students without even knowing the degree requirements for their own programs. This is just plain wrong. So, taking this as a use case: if your service is advising, I'd say you should attend CPD to get informed on (and be tested on) departmental, college, and university policies; get trained on degree requirements; know the costs of attending college; and get to know the people in other departments that support students (such as the writing center, the ADA center, and so on).

There are some service duties that faculty shouldn't be in charge of.  Marketing and recruitment being one of them (I am sure there are others too).  Faculty just don't have the skill set, and it's not really efficacious to have them obtain it.  There are positions on campus of people who do this type of thing.  If faculty want to switch careers, that's fine, but I do have an issue with faculty keeping their position as faculty while half-assing something (or worse, passing it onto staff...). Faculty can be part of these processes (of course), as experts in their own discipline and experts of their department, but really marketing and recruitment should reside elsewhere, not with faculty.

In the end, here are the guidelines for my NuFaculty setup:
  • Get rid of Tenured and Non-Tenured distinctions.  Everyone now becomes Tenure track with two possible starting points:  An RP and a TP. Having tenured and non-tenured tracks leads to discrimination and classism IMO. Just as there can be a lot of different types of professionals, there can be (and should be) a lot of different equal types of faculty.
  • Faculty get evaluated not on a one-size-fits-all model, but rather based on their designated positions and through consultation with their department chairs
  • Faculty CPD is a requirement, especially for TP.  CPD factors into annual and tenure evaluations.
  • Positions are flexible based on the needs of the individuals and the needs of the department, but they need to be set up before the evaluation period begins (can't change horses in mid-stream...)
  • Faculty positions are 12-month positions, not 9-month as they are now. Yes, faculty accrue vacation time that they can take. As 12-month employees they can decide when their "summers" (the periods when they don't teach) are, so their period of teaching responsibility can be flexible, and this is a win-win both for faculty and the school.
  • Service isn't required for faculty positions, but highly encouraged.  To undertake service the appropriate service type needs to be matched with pre-existing skills.  CPD is available for those skills if faculty want to grow into that area, but you can't practice until you show some competency.  Depending on the department needs service can substitute for 25% of the research or teaching component with prior approval and for defined periods of time.
  • The incentive to sign up for service is merit pay.

Your thoughts?


What am I training for again?

From PhD Comics

It's been a while since I've had the bandwidth to think about something other than my dissertation proposal. When I started this process four years ago (with matriculation in March 2014) I thought I'd be the first or second person in my cohort to be done (ha!), but like most marathoners I guess I am part of the pack, looking at the fast folks ahead of me 😏. Being part of the pack does have its benefits, such as getting an idea of how long the process takes (having friends in other cohorts also helps with this). I thought, initially, that when someone submitted their draft (be it proposal or final dissertation) they would get feedback and signs of life from their various committees soonish, but seeing Lisa's journey (currently at 5 weeks and counting) gave me a reality check. Waiting isn't bad per se (we wait for a ton of things in life), but I think it is the expectation of things to come that makes this type of waiting much more anxiety-inducing for us doctoral students. Questions pop into your mind, such as: Will they like what we submitted? How much editing do they need me to do? Will they ask me to go back to the drawing board? How long will that take? And if I have to defend this thing next week... well, do I have time to prepare? Do I remember everything I read in my review of the literature? Eeek!

That said, I think I should rewind a bit. What have I been up to? Well, lots, and lots, and lots of reading, and then funneling that into some sort of literature review. The past 4 (or 5?) weekends have been about process (and grit?); they have been about sitting down for hours and crafting what I learned into a coherent literature review. They have been about concentration (and probably some weight gain due to all the sitting... maybe some bad posture as well). And, at last, this past weekend I finished the 139-page monster, put it all into one Word file, and emailed my advisor (hopefully she won't hate me because of the length 😜). Without counting references, front matter, and tables of contents, here is how the word count breaks down:
  • Chapter 1: Introduction  ≅ 3,800 words
  • Chapter 2: Literature Review  ≅ 16,600 words
  • Chapter 3: Methods ≅ 6,700 words
Assuming that the average academic article is around 8,000 words (with references), I've written the equivalent of nearly three and a half academic articles, and this isn't even the full dissertation!

Now that the draft is submitted, I have some free time (maybe 3-4 weeks, if other cohort-mates' reports are any indication of the average wait) to work on a collaborative research project that's been on the back burner. In this project, in order to get down to the appropriate word length, the operative word is 'cut'. This is a little challenging because when it comes to cutting there aren't really that many options. Do you cut your methods? Then reviewers will call you out on the incompleteness of your methods (and you might actually get penalized for it!). Do you cut your findings? Well, for a qualitative research paper, without some qualitative data (which takes up space) you could be told that there isn't enough data (or they could say that you are making things up). Do you cut the literature review? Well, this seems like the most likely place to make cuts, but then how is your reading audience assured that you did your due diligence? Hmmmm... dilemma... dilemma... dilemma.

This pondering led me down another path: a recent (recentish?) tweet by Maha Bali, a critique of doctoral programs. The gist of it was that PhD programs don't really prepare you for a lot of things that are expected in academia. The traditional pillars of faculty work in academia are research publishing (usually of the academic-article variety), service, and teaching; however, the critique was that doctoral programs don't really prepare you for these things. I think this is a much larger discussion, which first needs an analysis of what faculty actually do and what they are asked to do. Maybe this is an opportunity to examine what faculty do and their relation to other roles at the institution, but for now I want to focus on one part of it: the research and publishing.

I consider myself lucky to have had opportunities to research and publish prior to pursuing my EdD, and to do this both alone and in collaboration with others (as an aside, I find collaboration more satisfying, as it satisfies both the work and social aspects of life). Working on the doctoral degree affords me the opportunity for some directed study to fill in potential areas that I was missing, and to see things from different frames of reference; for instance, I have a finer understanding of learning in other fields, such as the military and health care (just to pick a couple), because of my cohort-mates.

However, the dissertation process, and the reason for this process, seem quite arcane to me. I understand, from a cognitive perspective, that the dissertation is meant to showcase your skills as a researcher; and those with more romantic dispositions among us might also say that it contributes to the overall level of knowledge in our field. But if you are one of those romantics, let me ask you this: when was the last time you cited a dissertation in your research? And, just in case you are a smarty-pants who has cited a dissertation, how often do you check dissertation abstracts for your literature reviews? I digress though... Back on point...

It seems to me that as an academic (well, if I choose to go the tenure-track route once I earn my EdD) I need to contribute to the field by writing research articles, field notes, book chapters, reports, and maybe even a whole book; and I also need to provide peer reviews to fellow authors. With the exception of book writing (which not every academic does), the vast majority of this writing is between 3,000 and 9,000 words. A dissertation is considerably longer. This makes me wonder (again) whether the purpose of the dissertation is one of endurance (i.e., if you can do this, you can do anything!) or of holding us up to romantic, inappropriate, or irrational standards, as in "once you graduate you are expected to write books". As an aside, this may have been the case when there were fewer scholars around, but these days there aren't enough positions open in the traditional tenure-track faculty profession, and the Alt-Ac path isn't even addressed or acknowledged... but again, I digress.

The instructional designer in me has pondered the purpose of the dissertation (even before I applied to doctoral programs). If we've already replaced the once-prevalent master's thesis with other means of assessment (or at least made the thesis one of a few options), why can't we do the same with the doctoral dissertation, which - if we're honest - is just another form of assessment? I should say that my own point of reference here is what are called 'taught' PhDs, where there is required coursework before you are allowed to become a doctoral candidate, and not the kind you might find in Europe, where you are apprenticed into the discipline (you basically apply as an apprentice) and just work on your dissertation upon completion of a master's program.

So, my three questions for you:

  1. Do the traditional pillars of academia still hold up, or should they be re-conceptualized? What might the new pillars be? And how do they work collaboratively with other parts of the academy?
  2. Based on these current pillars, where does doctoral education fall short? (Name your field, as fields will most likely vary.)
  3. Keeping the dissertation in mind: what would you replace it with? What are the underlying assumptions of your model?


Discussion welcomed (if you blog, feel free to post a link)


Instructional Designers, and Research

Yet another post that started as a comment on something that Paul Prinsloo posted on Facebook (I guess I should be blaming Facebook and Paul for getting me to think of things other than my dissertation :p hahaha).

Anyway, Paul posted an IHE story about a research study which indicates that instructional designers (IDers) think they would benefit from conducting research in their field (teaching and learning), but don't necessarily have the tools to do so. This got me thinking, and it made me ponder the demographics of the IDers in this research: they were in higher education. I do wonder whether IDers in corporate settings value research less.

When I was a student studying for my MEd in instructional design (about 10 years ago), I was interested in the research aspects and the whys of the theories I was learning. I guess this is why further education in the field of teaching and learning was appealing to me, and why I am ultimately pursuing a doctorate. I digress though - my attitude (inquisitiveness?) stood in contrast with fellow classmates who were ambivalent about, or even annoyed by, the amount of time we spent on 'theory'. They felt that they should be graduating with more 'practical skills' in the whizbang tools of the day. We had experience using some of these tools - like Captivate, Articulate, Presenter, various LMSs, and so on - but obviously not the 10,000 hours required to master them†. Even though I loved some classmates (and for those of you who are reading this, it's not a criticism of you! :-) ), I couldn't help but roll my eyes at them when such sentiments came up during out-of-class meetups where we were imbibing our favorite (hot or cold) beverages. Even back then I tried to make them see the light. Tools are fine, but you don't go to graduate school to learn tools - you go to learn methods that can be applied broadly, and to be apprenticed into a critical practice. As someone who came from IT before adding ID to my knowledge, I knew that tools come and go, and that a degree focused mostly on tools is a waste of money (and does students no favors....hmmmm...educational fast food!). I know that my classmates weren't alone in their thinking, having responded to a similar story posted on LinkedIn this past summer.

My program had NO research courses (what I learned about research I learned on my own, and through the mentorship of professors in my other masters programs). Things are changing in my former program, but there are programs out there, such as Athabasca University's MEd, which work better for those who want a research option.

Anyway, I occasionally teach Introduction to Instructional Design for graduate students, and I see both theory-averse students (like some former classmates) and people who are keen to know more and go deeper. I think as a profession we (those of us who teach, or run programs in, ID) need to do a better job of helping our students become professionals who continually expand their own (and their peers') knowledge through conscious attempts at learning, and research skills are part of that. There should be opportunities to learn tools, for the more immediate need of getting a job in the field, but the long-term goal should be setting up lifelong learners and researchers in the field. Even a researcher with a little r should have the tools and skills to do this in order to improve their practice.

As an aside, I think that professional preparation programs are just one side of the equation. The other side is employment and employers, and the expectations those organizations have of instructional design. This is equally important in helping IDers help the organization. My conception of working with faculty members as an IDer was that we'd have a partnership: we'd jointly work out what was best based on what we had (technology, expertise, faculty time) so that we could come up with course designs that would be good for their students. The reality is that an IDer's job, when I did this on a daily basis, was much more tool-focused (argh!). Faculty would come to us with specific ideas of what they wanted to do, looking for tool recommendations and implementation help - but we never really had those fundamental discussions about whether the approach was worth pursuing in the first place. We were the technology implementers and troubleshooters - and on occasion we'd be able to "reach" someone and develop the relationships that allowed us to engage in those deeper discussions. When the organization sees the IDer role as yet another IT role, it's hard to make a bigger impact.

On the corporate side, a few of my past students who work(ed) in corporate environments have told me that theory is fine, but that in academia "we just don't know what it's like in corporate," and they would have liked less theory and more hands-on preparation for corporate circumstances. It's clear to me that even in corporate settings, organizational beliefs about what your job as an IDer is impact what you are allowed to do (and hence how much YOU impact your company). Over drinks, one of my friends (who works in corporate ID, but was formerly in higher education) recently quipped that the difference between a credentialed (MEd) IDer and one who is not credentialed (someone who just fell into the role) is that the credentialed IDer sees what's happening (shovelware) and is saddened by it, while the non-credentialed person thinks it's the best thing since sliced bread‡. Perhaps this is an over-generalization, but it was definitely food for thought.

At the end of the day, I'd like to see IDers more engaged in education research. I see it as part of being a professional who wants to grow and be better at what they do, but educational programs that prepare IDers need to help enable this, and organizations that employ them need to see them as an asset, much as they do librarians, for whom research is expected to be part of the job.

Your thoughts?


MARGINALIA:
† This is obviously a reference to Gladwell's work, and the 10,000 hours of deliberate practice.  It's one of those myths (or perhaps something that needs a more nuanced understanding). It's not a magic bullet, but I used it here for effect.
‡ Grossly paraphrasing, of course

Are MOOCs really that useful on a resume?


I came across an article on Campus Technology last week titled 7 Tips for Listing MOOCs on Your Résumé, citing the CEO of an employer/employee matchmaking firm. One piece of advice is to create a new résumé section for MOOCs taken and list them there. This is not all that controversial, since I do the same - not on my résumé, but on my extended CV (which I don't share with anyone), and it serves more as self-documentation than anything else.

The first part that got me thinking was the piece of advice that says "only list MOOCs that you have completed". The rationale is as follows:

"Listing a MOOC is only an advantage if you've actually completed the course," Mustafa noted. "Only about 10 percent of students complete MOOCs, so your completed courses show your potential employer that you follow through with your commitments. You should also be prepared to talk about what you learned from the MOOC — in an interview — and how it has helped you improve."  

This bothered me a little bit. In my aforementioned CV I list every MOOC I signed up for† and "completed" in some way, shape, or form. However, I define what it means to have "completed" a MOOC. I guess this pushback on my part stems from having started my MOOC learning with cMOOCs, where there (usually) isn't a quiz or some other deliverable that is graded by a party other than the learner. When I signed up for specific xMOOCs, I did so for a variety of reasons, including interest in the topic, the instructional form, the design form, the assessment forms, and so on. I've learned something from each MOOC, but I don't meet the criterion of "completed" if I go by the rubrics set forth by the designers of those xMOOCs. I actually don't care what those designers set as the completion standards for their MOOCs, because a certificate of completion carries little currency anywhere. Simple time-based economics dictate that my time shouldn't be spent on activities leading to a certificate that carries no value, if I don't see value in those assessments or activities either. Taking a designer's or professor's path through the course is only valuable when there is a valuable carrot at the end of the path. Otherwise, it's perfectly fine to be a free-range learner.

Another thing that made me ponder a bit is the linking to badges and showcasing your work. Generally speaking, in the US at least, résumés are a brief window into who you are as a potential candidate. What you're told to include in a résumé is a brief snapshot of your relevant education, experience, and skills for the job you are applying for. The general advice I hear (which I think is stupid) is to keep it to 1 page. I ignore this and go for 1 sheet of paper (two pages if printed on both sides). Even that is constraining if you have been in the workforce for more than 5 years. The cover letter expands on the résumé, but that too is brief (1 page, single spaced). So, a candidate doesn't really have a ton of space to showcase their work, and external links (to portfolios and badges) aren't really encouraged. At best, a candidate can whet the hiring committee's appetite enough to get in for an interview. This is why I find this advice a little odd.

Your thoughts on MOOCs on résumés?


NOTES:
† This includes cMOOC, xMOOC, pMOOC, iMOOC, uMOOC, etcMOOC...

Course beta testing...


This past weekend a story came across my Slashdot feed titled Software Goes Through Beta Testing. Should Online College Courses? I don't often see educational news on Slashdot, so it piqued my interest. Slashdot links to an EdSurge article where Coursera courses are described as going through beta testing by volunteers (unpaid labor...).

The beta tests cover things such as:

... catching mistakes in quizzes and pointing out befuddling bits of video lectures, which can then be clarified before professors release the course to students.

Fair enough - these are things that we tend to catch in developing our own (traditional) online courses as well, and that we fix or update in continuous offering cycles. The immediate comparison in this EdSurge article is, quite explicitly, between xMOOCs and traditional online courses. The article mentions rubrics like Quality Matters and SUNY's open-access OSCQR ("oscar") rubric for online 'quality'. One SUNY college is reportedly paying external people $150 per course for such reviews of their online courses, and the overall question seems to be: how do we get people to beta test their online courses?

This article did have me doing a bit of a Janeway facepalm when I read it (and when I read the associated comments). The first reason I had a negative reaction was that the article assumes such checks don't happen. At the instructional design level there are (well, there are supposed to be) checks and balances for this type of testing. If an instructional designer is helping you design your course, you should be getting critical feedback as a faculty member on this course. In academic departments where designers do all of the design and development (in consultation with the faculty member as the expert), the entire process is run by IDs, who should see to this testing and quality control. Even when faculty work on their own (without instructional designers), which often happens in face-to-face courses, there are checks and balances as well. There are touch-points throughout the semester, and at the end, where you get feedback from your students and can update materials and the course as needed. So, I don't buy this notion that courses aren't 'tested'.†

Furthermore, a senior instructional designer at SUNY is cited as saying that one of the challenges "has been figuring out incentives for professors or instructional designers to conduct the quality checks," but at the same time is quoted as saying “on most campuses, instructional designers have their hands full and don’t have time to review the courses before they go live.” You can't say (insinuate) that you are trying to coax someone to do a specific task, and then say that these individuals don't have enough time to do the task you are trying to coax them into. When would they accomplish it? Maybe the solution is to hire more instructional designers? Maybe look at the tenure and promotion processes at your institution and see what can be done there to encourage better review/testing/development cycles for faculty who teach. Maybe hire designers who are also subject matter experts to work with those departments.‡

Another problem I have with this beta-testing analogy is that taught courses (not self-paced courses, which is what xMOOCs have become) have the benefit of a faculty member actually teaching the course, not just creating course-packet material. Even multimodal course materials such as videos, podcasts, and animations are, in the end, a self-paced course packet if there isn't an actual person there tutoring or helping to guide you through that journey. When you have an actual human being teaching/instructing/facilitating/mentoring the course and the students in it, there is a certain degree of flexibility. You do want to test somewhat, but there are a lot of just-in-time fixes (or hot-fixes) as issues crop up. In a self-paced course you do want to test the heck out of the course to make sure that self-paced learners don't get stuck (especially when there is no other help!), but in a taught course, extensive testing is almost a waste of limited resources. The reason is that live courses (unlike self-paced courses and xMOOCs) are meant to be kept up to date and to evolve as new knowledge comes into the field (I deal mostly with graduate online courses). Hence, spending a lot of time and money testing courses where some component will change within the next 12-18 months is not a wise way to use a finite set of resources.

At the end of the day, I think it's important to critically query our underlying assumptions. When MOOCs were the new and shiny thing, they were often (and wrongly) compared with traditional courses - they are not the same, and they don't have the same functional requirements. Now that MOOCs are 'innovating' in other areas, we want to make sure that these innovations are found elsewhere as well, but we don't stop to ask whether the functional requirements and the environment are the same. Maybe for a 100-level (intro) course that doesn't change often, and that is taken by several hundred students per year (if not per semester), you DO spend the time to exhaustively test and redesign (and maybe those beta testers get 3 credits of their college studies for free!), but for courses that have the potential to change often and have fewer students, this is overkill. In the end, for me, it comes down to local knowledge and the prioritizing of limited resources. Instructional designers are a key element of this, and it's important that organizations utilize their skills effectively for the improvement of the organization as a whole.

Your thoughts?

NOTES:
† Yes, OK, there are faculty out there who have taught the same thing for the past 10 years without any change - even the same typos in their lecture notes! I hope that these folks are the exception in academia and not the norm.

‡ The comparison here is to the librarian world where you have generalist librarians, and librarians who also have subject matter expertise in the discipline that they are librarians in. Why not do this for instructional designers?

Pondering assigning groupwork...

The summer semester is over! Well, it's been over for several weeks now and the fall semester is in full swing, but I am not teaching this semester (I'm focusing on projects that have been on the back-burner for a while). Taking a break from teaching actually makes me think more about teaching, in an odd way (I guess out of sight, but not out of mind).

One of the courses that I teach is an intro course to instructional design and learning technology (INSDSG 601, or just 601). Since this is a course that introduces students not only to the discipline, but also to the program of study at my university, I thought that it would be a good idea to give students some foundations in group work, since this is something that they will encounter not only in the "real" (aka working) world, but also in subsequent courses in the program, and they need to be able to work effectively with one another.

The way the course assignments work is that there is one big project that lasts the entire semester, which is individual, and several (4) smaller projects that are team-based. The smaller projects form a jigsaw activity, which allows students to become experts in one smaller area and teach others about it.

The first time around (summer 2015) I had students switching teams throughout the semester. The idea was to give students more choice in their group projects, and the groups would be self-forming that way. The feedback that I got was that this was tiring for the students. I think that forming/performing/adjourning 4 groups in the span of 13 weeks was tiring, and it also didn't give students the space to actually get to know people beyond the scope of the project (which would have been useful for peer review of their semester projects!).

This past summer, I changed things up a bit and formed the groups myself (an idea I picked up from Rebecca H.). Luckily I seemed to have a balanced mix of K-12, Higher Education, and Corporate students in the class, which made group creation a little easier: take one of each, wherever possible, and create a group. This way, groups needed to negotiate which topics they wanted to undertake as a group, which potentially limited the choice of topics for individual students; but on the plus side they got to know their team-mates, and there were semester-long pods which could, in theory, support peer review throughout the semester. I didn't require this for grading; I wanted to see whether groups would share their individual semester projects amongst each other for review.
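(A quick aside for the nerds: the "take one of each" approach above is essentially a stratified round-robin assignment. Here's a minimal sketch in Python - the roster, names, and sector labels are made up for illustration, and this isn't code I actually used for the class.)

```python
from collections import defaultdict
from itertools import zip_longest

def form_groups(students):
    """Stratified round-robin: one student per sector in each group, where possible.

    `students` is a list of (name, sector) tuples; the sectors here are
    K-12, Higher Ed, and Corporate, as in the class described above.
    """
    by_sector = defaultdict(list)
    for name, sector in students:
        by_sector[sector].append(name)
    # Zip across sectors: the i-th group takes the i-th student from each sector.
    groups = []
    for members in zip_longest(*by_sector.values()):
        groups.append([m for m in members if m is not None])
    return groups

# Hypothetical roster, just to show the mechanics:
roster = [
    ("Alice", "K-12"), ("Bob", "K-12"),
    ("Carol", "Higher Ed"), ("Dan", "Higher Ed"),
    ("Eve", "Corporate"), ("Frank", "Corporate"),
]
print(form_groups(roster))  # → [['Alice', 'Carol', 'Eve'], ['Bob', 'Dan', 'Frank']]
```

With a lopsided roster, the trailing groups simply end up smaller, which mirrors the "wherever possible" caveat above.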

This worked out OK. I would say that 50% of the class loved their teams...and 50% either passively disliked (you know, the mild groan) or actively disliked their team-mates. Whereas in the first attempt (2015) people seemed tired of the process, this second try at teamwork made people either love or hate their team-mates. Those who loved their team-mates seemed to coordinate future classes together, and those who hated their collaborations...well, I didn't hear much more about it in their weekly reflections. Those who seemed to dislike groupwork also had things happen in their groups, some of which were just not avoidable - "life happens!" type things, like unexpected family or work events.

One of the things that came up in both the positive and the negative experiences relates to empathy. In some cases of teams that didn't work out well, I got the sense that people were thinking along the following lines: "I get that xyz happened to student_name, but that doesn't concern me much; I am here to learn abc and I've got my own problems to deal with, so too bad for them, but I need to be done with this project." I think that if students could empathize more with one another they wouldn't have such negative reactions to groupwork. On the other end of the spectrum, even in well-functioning groups, I got the sense that there were some people who had more time than others (just 'cause), so they tended to overwhelm the rest of the group with their eager excitement. That's cool (I like eager people! I relate to them :-) ), but at the same time it can create a feeling among some group members that they aren't performing at the level they should. The group-level performance is much higher than what the project requires, and this can create feelings of failing your team-mates. I think this is an empathy issue too.

On the whole, if I could control for those (uncontrollable) life issues, I think creating groupwork pods for the semester worked out better. But I am still looking to tweak the group experience in the course. How do we increase communication, understanding, and empathy? Do I require groups to meet weekly and submit meeting minutes (to make sure that they met)? Do I undertake a role-play at the beginning of the semester in a live session to increase empathy? And how can groups be leveraged to support fellow team-mates who might be falling behind for reasons that exist either inside or outside of class?


Thoughts?

Instructional whatnow?

A number of threads converged for me last week, and all of them exist on a continuum. The first thread began in the class that I am teaching this summer, INSDSG 601: Foundations of Instructional Design & Learning Technology. One of the things that we circle back to as a class (every couple of weeks) is the notions of instructor and designer. Where does one end and the other begin in this process? It's a good question, and like many questions, the answer is "it depends". The metaphor that I use calls back to two sides of the same coin. In order for instruction to ultimately be successful, you need both sides to work together. An excellent design will fail in the hands of a bad instructor, and a bad design will severely hold back a good instructor (assuming that there is an instructor and it's not self-paced learning). There is the other side too: as instructional design students we were told that we would be working with SMEs (subject matter experts) to develop training, but how one works with SMEs is not clarified. A good friend of mine, working in corporate ID, told me recently that communication with an SME happens through an intermediary acting as a firewall, and it's hard to get the information necessary to work on good instructional designs (now there is some organizational dysfunction!). The key take-away here is that you can't really separate these roles. Each needs to be informed by the other, and communication is key to successful training interventions.

In another thread, I was chatting with Rebecca (at some point or another this summer) about assessments and grading in the classes that we teach, and another layer was added to this design-and-instruction challenge. You can have a really nice design, with lots of learner feedback and continuous assessment, but the situation might be untenable. Take, for example, the case of an adjunct instructor (like me or Rebecca). At our institution we are paid for 10 hours of effort per week for a specific course (each course counts as 25% FTE, and assuming a 40-hour workweek, each course is about 10 hours of work). These 10 hours include design maintenance work, synchronous sessions (if you have any), discussion forums, and assessment & feedback. The design of your course might be awesome, but it might require more time on the part of the instructor than the organization has budgeted for.

So the question is: how does good design sync up with organizational norms and constraints? Organizational norms are something we've talked about in the class as well - instructional design does not exist in a vacuum. For the course that I teach in the summer, I made grading a little more "efficient" by using ✓/✘/Ø grading for all assignments (submitted and passing; submitted and not passing, can revise; nothing submitted), which has largely addressed the issue of haggling for points. This still leaves 43 items per student to be graded, with some level of feedback given to the student for each.
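To put rough numbers on that grading load, here's a back-of-the-envelope sketch (in Python, just for fun). Only the 43 items per student and the 10 paid hours come from the paragraphs above; the class size, semester length, and minutes per item are assumptions I made up for illustration:

```python
# Back-of-the-envelope grading load for the ✓/✘/Ø scheme described above.
students = 15             # assumed class size
items_per_student = 43    # from the post
weeks = 13                # assumed semester length
minutes_per_item = 5      # assumed: read, mark ✓/✘/Ø, jot brief feedback
paid_hours_per_week = 10  # 25% FTE of a 40-hour workweek, from the post

total_items = students * items_per_student
grading_hours_per_week = total_items * minutes_per_item / 60 / weeks
print(f"{total_items} items, ~{grading_hours_per_week:.1f} h/week of grading "
      f"out of {paid_hours_per_week} paid hours")
# → 645 items, ~4.1 h/week of grading out of 10 paid hours
```

Even under these charitable assumptions, grading alone eats roughly 40% of the paid hours - before any synchronous sessions, forums, or design maintenance.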

I know that I am still spending more than 10 hours per week on the course, so the question - from a design perspective - is this: what is the most efficacious way of giving learners feedback on their projects and other aspects of the course while staying within organizational constraints, and while adhering to sound (and researched) pedagogical practices? In other words, what design options give you the biggest bang for the buck when it comes to teaching presence and learner outcomes? Given that I've been more than happy to spend the extra time each week on the class, this is not a "problem" I need to solve for myself right now, but it is a design challenge for other colleagues!

The final thread came from Twitter, when (out of the blue?) a burst of discussion on instructional design started and Maha wrote:

@KateMfD how do u design a priori for someone you have not met??? Duh
@KateMfD to this day, I don't understand how Instructional Design begins w "needs analysis" before we ever meet the students!

JR added to the discussion by tweeting:
@Bali_Maha @koutropoulos @KateMfD but in a similar way, how do we know what courses we are going to teach prior to meeting Ss on day 1?
@Bali_Maha @koutropoulos @KateMfD not always a great starting point, but often attempting to benefit the organization, learner comes 2nd

I've been thinking about this, and I've been trying to come up with a metaphor that makes sense. The one that came to mind comes from the world of clothing: the dichotomy of tailored versus mass-produced clothing. The textbook that we use in my program is the Systematic Design of Instruction by Dick, Carey, and Carey, which presents the Dick & Carey model. The textbook seems to assume that as designers we have a ton of time to conduct a needs analysis (is the training needed?), a learner analysis (who are the learners?), and a context analysis (where will learning take place?), and to design a breakdown of what exactly needs to be learned. And, sure, if we were instructional designers for the rich and famous, on retainer, we'd know a lot of this stuff ahead of time; if those rich folks wanted to learn to paint, or water ski, or whatever, we'd have the luxury of knowing our learners, environment, constraints, and needs, and we'd be able to do something about it (we'd also be paid the big bucks!). This is what I call the tailored model - we have the luxury of taking all the measurements we need, and the client is willing to wait for the product.

The environment we work in, however, is the mass-produced environment. In our day-to-day work as instructional designers we do our due diligence and try to do some needs analysis, but we also work from educated guesses about who our learners might be. This is something that we discussed (either on air or off air) with different colleagues at Campus Technology and AAEEBL this week. How does one decide what programs to offer? What courses fit into those programs? What are the requirements for the program, and how do each course's requirements fit into that puzzle? Who are the learners who come into those courses? The answer to that last question is an educated guess. You might design a program, or a course, or a set of courses with a specific learner group in mind; however, that persona is in fact an educated guess.

Hence, we use assumptions to start the process for that which is mass produced, and we change (or adapt) it on the fly as we get to learn who the learners in our classroom are. There are constraints in place to make sure that the variation is "manageable" - and for a college program (at the graduate level, anyway) that constraint is admissions. By managing the admissions process, faculty and departments know who is coming into their classes, and they can be prepared for that adaptation. Further adaptation happens in class. It's not complete adaptation, since there are constraints, but adaptation exists (or, I argue, should exist). This way we're taking something that is mass produced and tailoring it to the needs of the individual (to some extent, anyway). This is where design and instruction meet again - two sides of the same coin.


Thoughts?


#DigPed PEI with Amy Collier

I am not sure why my Surface Pro 3 camera decided to hyper-correct the lighting in my home office, but it seems that the only way for me to be properly lit was to look at my secondary monitor, which gives the appearance of sidetalking... Oh well. It was a good session nevertheless :)

