Club Admiralty

v7.0 - moving along, a point increase at a time

Multilitteratus Incognitus

Traversing the path of the doctoral degree

Mentor-Teacher-Hybrid Presence-course design...


This semester is turning out to be quite a busy one.  It was a good idea not to teach a graduate course this semester so I could focus on my dissertation proposal; however (like that irresistible dessert at the end of the meal), various collaborative projects have come in to fill the "void" left in my schedule by not teaching (the void that is supposed to go toward dissertation prep), and these projects have me thinking.

First is the aspect of Hybrid Presence.  Suzan and I coined this term to describe something between Teaching Presence and Learner Presence for the most recent Networked Learning Conference.  We are currently working more on this topic for an upcoming book chapter.

Second is gamification, a term that keeps moving in and out of my list of curiosities that I want to play around with more.  I've done some work on this for school and for professional organization presentations, but nothing big in terms of an article (in my ALECS proposal it was only part of the ingredients).

Finally, since I am not teaching next spring (how much do you want to bet that other papers will fill in the void, LOL), I've been thinking about the course I usually teach in the summers.  I facilitated the transition from "Introduction to Instructional Design" to "Foundations in Instructional Design and Learning Technology" - a small wording change, but the connotations of that change were profound for the course.  Rebecca H and I have taught variations of the course, as well as variations of INSDSG 684. For the past 4 years I've wanted to gamify the learning experience, which I have partly done through badging, although that hasn't caught on all that much.  As an opt-in experience it varies a lot. This leaves me pondering: is it wise to move from the gamification end of the spectrum to full-on gaming in an introductory course?  If yes, how do you do it?  The boardgame metaphor appeals to me, but there are other metaphors that do as well!

On another strand, there are students in the MEd program I teach in who are close to graduation and whom I've had in my class at some point or another.  Now that they are a little further into the program I'd like to invite them back, for credit, to be part of the introductory course.  But not as teaching assistants - I think that's a waste of their time and money. Rather, I want them to be mentors who are developing what Suzan and I term a Hybrid Presence.  I'll be around to mentor the mentors (while working on my own Hybrid Presence), but I want to tease out how that would work as a for-credit course.  Since I only really teach two courses per year (limitations of employment), my current puzzle to solve is this: I want to combine the transformgameation† of the introductory course with this mentorship model I want to develop. This way I am working on a gamified design that's (maybe) more interesting, and it won't bore the mentees since they will be part of something new.

What do you think of this idea?

† word I invented, transform + game = transformgameation, tell it to your friends, let's have it catch on.


Pondering assigning groupwork...

The summer semester is over!  Well, it's been over for several weeks now and the fall semester is in full swing, but I am not teaching this semester (I'm focusing on projects that have been on the back-burner for a while). Taking a break from teaching actually makes me think more about teaching, in an odd way (I guess out of sight, but not out of mind).

One of the courses that I teach is an intro course to instructional design and learning technology (INSDSG 601, or just 601).  Since this is a course that introduces students not only to the discipline but also to the program of study at my university, I thought that it would be a good idea to give students some foundations in group work, since this is something they will encounter both in the "real" (aka working) world and in subsequent courses in the program, and they need to be able to work effectively with one another.

The way the course assignments work is that there is one big project that lasts the entire semester, which is individual, and several (4) smaller projects that are team-based.  These smaller projects form a jigsaw activity that allows students to become experts in one smaller area and teach others about it.

The first time around (summer 2015) I had students switching teams throughout the semester.  The idea was to give students more choice in their group projects, and the groups would be self-forming that way. The feedback I got was that this was tiring for the students. I think that forming/performing/adjourning 4 groups in the span of 13 weeks was tiring, and it also didn't give students the space to actually get to know people beyond the scope of the project (which would have been useful for peer review of their projects!).

This past summer, I changed things up a bit and formed the groups myself (an idea I picked up from Rebecca H.). Luckily I seemed to have a balanced mix of K-12, Higher Education, and Corporate students in the class, which made group creation a little easier: take one of each, wherever possible, and create a group. This way groups needed to negotiate which topics they wanted to undertake as a group, which potentially limited the choice of topics for individual students, but on the plus side they got to know their team-mates, and there were semester-long pods which could in theory support peer review throughout the semester.  I didn't require it for grading; I wanted to see whether groups would share their individual semester projects amongst each other for review on their own.
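For the programming-inclined, the "one of each sector per group" heuristic is essentially a round-robin deal. Here's a minimal sketch of the idea; the roster and names are made up for illustration, and this isn't anything I actually ran when forming the groups:

```python
from collections import defaultdict
from itertools import zip_longest

def form_groups(roster):
    """Deal out one student from each sector (K-12, Higher Ed,
    Corporate) per group, round-robin style."""
    by_sector = defaultdict(list)
    for name, sector in roster:
        by_sector[sector].append(name)
    # zip_longest takes one student per sector for each group;
    # smaller sectors run out, leaving the last groups short.
    groups = []
    for members in zip_longest(*by_sector.values()):
        groups.append([m for m in members if m is not None])
    return groups

roster = [("Ana", "K-12"), ("Ben", "HigherEd"), ("Cai", "Corporate"),
          ("Dee", "K-12"), ("Eli", "HigherEd"), ("Fay", "Corporate"),
          ("Gus", "K-12")]
for group in form_groups(roster):
    print(group)
```

In a real class you'd probably shuffle within each sector first so the groups aren't determined by roster order, but the dealing-out logic is the same.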

This worked out OK.  I would say that 50% of the class loved their teams...and 50% either passively disliked (you know, the mild groan) or actively disliked their team-mates. Whereas in the first attempt (2015) people seemed tired of the process, this second try at teamwork made people either love or hate their team-mates.  Those who loved their team-mates seemed to coordinate future classes together, and those who hated their collaborations...well, I didn't hear much more about it in their weekly reflections.  Those who seemed to dislike groupwork also had things happen in their groups; some things were just not avoidable - "life happens!" type of things, like unexpected family or work events.

One of the things that came up in both positive and negative experiences relates to empathy. In some cases of teams that didn't work out well, I got the sense that people were thinking along the following lines: "I get that xyz happened to student_name, but that doesn't concern me much; I am here to learn abc and I've got my own problems to deal with, so too bad for them, but I need to get my project done here."  I think that if students could empathize more with one another they wouldn't have such negative reactions to groupwork.  At the other end of the spectrum, even in well-functioning groups, I got the sense that some people had more time than others (just 'cause), so they tended to overwhelm the rest of the group with their eager excitedness.  That's cool (I like eager people!  I relate to them :-) ), but at the same time it can create a feeling among some group members that they aren't performing at the level they should. When the group-level performance is much higher than what the project requires, this can create feelings of failing your team-mates.  I think this is an empathy issue too.

On the whole, if I were able to control for those (uncontrollable) life issues, I think creating groupwork pods for the semester worked out better.  But I am still looking to tweak the group experience in the course.  How do we increase communication, understanding, and empathy?  Do I require groups to meet weekly and submit meeting minutes (to make sure that they met)? Do I run a role-play at the beginning of the semester in a live session to increase empathy? And how can groups be leveraged to support fellow team-mates who might be falling behind for reasons that exist either inside or outside of class?


Getting beyond rigor

The other day I got access to my summer course on Blackboard.  With just under 25 days left until the start of courses, it's time to look at my old syllabus (from last summer), see what sorts of innovations my colleague (Rebecca) has in her version of the course, and decide how to update my own.  I had some ideas last summer, but since then the course has actually received an update to its title and objectives, so I need to make sure that I am covering my bases.

Concurrently, in another thread, while I was commuting this past week I was listening to some of my saved items in Pocket, and I was reading (listening to) this article on Hybrid Pedagogy by Sean Michael Morris, Pete Rorabaugh and Jesse Stommel titled Beyond Rigor. This article brought me back to thinking more about academic rigor and what the heck it really means.  I think it's one of those subjects that will get a different answer depending on who you ask.  The authors write that:

institutions of higher education only recognize rigor when it mimics mastery of content, when it creates a hierarchy of expertise, when it maps clearly to pre-determined outcomes

I suspect that's partially one definition of what rigor is thought to be; however, I've come across courses that I personally found lacking in rigor even though they met those specific requirements mentioned above.  Sometimes I've found that rigor has to do with the level of work a learner is expected to do.  If we think of learning as exercise and school as a gym, a rigorous workout is something that raises your heart rate, burns calories, and gives your muscles a workout. At the end of a rigorous workout you feel tired. Luckily for exercise folks that stuff is easily measured.  I know that I did something rigorous when I feel exhausted after the gym. However, when it comes to learning we don't have instrumentation that is as easy to use and assess.  So, what the heck is rigor in a college course? How can we define it? Is it a malleable concept or is it hard-set?

Interestingly enough, the authors approach rigor not from the attributes of the content (i.e. how much of it, and by whom), but rather from an environmental aspect. Rigor emerges from the environment rather than being a predefined constant.  Rigor emerges when the environment is engaging to the learner, when it provides a means to support critical inquiry, when it encourages curiosity, when it is dynamic and embracing of unexpected outcomes, and finally when the environment is derivative.  This last one was defined as a learning environment that is "attentive and alive, responsive not replicative."

The one constraint I have with this course (well, other than the course description ;-) ) is the textbook. The department uses The Systematic Design of Instruction by Dick, Carey, and Carey as its foundational book and model. Last summer I developed the course from scratch using DCC as the core organizing principle.  Now, while still important, after the update to the course title and description DCC must share the stage with other elements, so I am re-considering (again) what rigor looks like in this environment.  I am pondering how I can rework an introductory course to be derivative and to give students more control in shaping the curriculum (thinking rhizomatically here) beyond having to choose from some finite options...

At the moment, rigor for me is still one of those "I know it when I see it" things.  It would be interesting to discuss this a little further with others who are interested in the topic to see where we land on it.

On another note, and rigor aside, there are two things I am keeping and/or expanding.  The first is mastery grading (you either pass or you need revision) - I am not going back to numerical grades for anything; I would prefer that students focus on feedback rather than on a numerical grade.  The other is digital badges.  They worked fine last summer, I just need to figure out how to make them better.


Grading Rubrics


The other day I came across this PhD Comics strip on grading rubrics. As a trained instructional designer (and having worked with instructional designers on and off since I started university as an undergraduate student), I've found that the concept of rubrics has really stuck with me.  That said, I generally struggle with rubrics.

In theory they are brilliant - a way to objectively measure how well someone has done on whatever assessable assignment. In practice, though, they are not that great, and they can be a source of discontent and discord in the classroom (the "why did you indicate that my mark is in category B when it's clearly, in my student mind, in category A?" argument). For this reason I try to create rubrics that are as detailed as I can make them.  That said, it seems that detailed rubrics (like detailed syllabi) are rarely read by students ;-)

Another issue arises with inherited courses. When I've inherited courses from other people, their rubrics tend to be less detailed and more subject to interpretation - which in my mind doesn't help the learner much - and that does little for consistency between faculty members who might teach the same course.  Here is an example (redacted to try to keep assignment and course somewhat anonymous; it is an intro course, though):

Using this rubric, I would say that two people (who don't know each other) teaching the same course could potentially give two different marks for the same assignment.  What's important here is the feedback given to the learner, so the mark may not matter as much in a mastery grading scenario, and (good) feedback gives them a way to re-do and improve for a better mark if they want.

The more I design, and the more I teach, the more I wonder whether detailed rubrics are better as a way to on-board professors and instructors into departmental norms, and whether broader rubrics are better for the "student view," used with a more mastery-based approach to learning. :-/



Teaching, Grades, and the Impostor Syndrome


The other day I was reading a blog post by Rebecca on marking and getting a sense of that impostor syndrome creeping in. I love reading posts like these because I still consider myself new to teaching, even though I've been doing it for a couple of years now.  Some of the things that she describes are things that I have thought or experienced, and some are not.

In terms of impostor syndrome, it hasn't come out for me with grading assignments.  In the past, when I have had momentary panics or thoughts that impostor syndrome is setting in, it's usually been around content-area knowledge!  Early on, when I started teaching, I wasn't even a doctoral student.  I was a practitioner and life-long learner, with a little research under my belt.  I knew enough, but I didn't consider myself the font of all knowledge - and that was scary.  What would learners think of me?  What if I was in a 'pop quiz' type of situation and the learners asked me some question I didn't know the answer to? Oh no! :-0

Luckily this only happened for a couple of semesters.  I quickly came to two realizations.  First, it's not possible for me to be the font of all knowledge on the subject I am teaching.  Researchers keep researching, things keep getting published, and it's not possible to keep up with everything in order to be completely up to date so that I could answer those unforeseen pop quizzes from students (which never came, by the way).  I am not even 40 yet, so I don't expect to have the same amount of knowledge 'banked' as colleagues who have been active in the profession three times longer than I have.  It's just not a good metric on which to base your professional worth.

Another realization is that I shouldn't be the font of all knowledge.  Students can't just come knocking on my door whenever they have a question about some content area.  What's important is that we are all lifelong learners and that we exist in a network (how connectivist of me).  If we don't know something we can (and should be able to) find it through our network of humans and non-human appliances. As an instructor - a professor - one of my objectives should be to help them become self-sustaining; otherwise their degrees become not-as-valuable (some may say worthless) a few years after they graduate.

Once I got comfortable with these two propositions, impostor syndrome went away for me.  In terms of grading assignments, I too am all about the feedback.  I dislike grades (The Chronicle had an article on grades the other day). I wish our courses worked on pass/needs improvement for grades, as this would better align with how I now design the classes that I teach.  As I was reading Rebecca's post I reflected a bit on what it was like the first time I taught INSDSG 619 (now 684).

The course was designed by a colleague to be an exemplar of how to design for online. What you read in the course on a week-to-week basis was also reflected in the design of the course itself.  I've written before about not feeling empowered to change things (other than updating readings to keep them current).  One of the things I really disliked about that course was the rubrics for assignments.  Now, in theory rubrics are a good idea.  They describe to the learners what they need to do in order to pass the assignment.  However, I found some rubrics so non-granular that basically everyone who put a little effort in could get an "A".  There is nothing wrong with everyone getting an "A"; however, I noticed (over 3 years of teaching that course) that the quality of projects varied greatly, yet learners were still all getting an A or an A-.  That is because the rubric I inherited had only 3 levels, and the differentiators between the levels were so minute (in my mind) that a lower grade was really the result of a technicality (again, in my view).

In any case, for the introductory course that I taught last summer, I decided to start from scratch and make all assignments pass/needs improvement. This way I can make an assessment as to whether something is passing (or not) and then focus more on giving feedback.  The main issue - when it comes to grades - is how do you differentiate an A from a B from a C?  The answer is imperfect: volume!  The more assignments you complete at passing quality, the higher your course grade.  It's not something I like, but it works for now.  I guess I'll need to brainstorm more about this. The plus side is that I am not feeling impostory, so that's out of the way ;-)
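To make the "volume" idea concrete, here's a toy sketch of how pass/needs-revision marks could roll up into a letter grade. The thresholds are invented for illustration; they are not the actual cut-offs from my course:

```python
def course_grade(outcomes, thresholds=((9, "A"), (7, "B"), (5, "C"))):
    """Map the number of assignments of passing quality to a letter
    grade.  `outcomes` is a list of booleans (True = pass); since
    students can rework and resubmit, only the final status counts."""
    passed = sum(1 for ok in outcomes if ok)
    # Walk the thresholds from highest to lowest and return the first
    # letter grade the pass count qualifies for.
    for minimum, letter in thresholds:
        if passed >= minimum:
            return letter
    return "F"

print(course_grade([True] * 8 + [False] * 2))  # 8 passes -> "B"
```

The appeal of a scheme like this is that the conversation with the student stays about the feedback on each assignment, and the letter grade becomes a mechanical afterthought.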

Seeking the evidence


In my quest to catch up on Pocket - before the news becomes stale - I came across this post by cogdog on seeking the evidence behind digital badges.

The anatomy of the Open Badge includes the possibility of attaching links to evidence for the skill you are being badged for.  Of course, just because there is an associated metadata field available for people to use, it doesn't mean that people actually use it!
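For the curious, the Open Badges spec models this as an optional `evidence` field on the badge assertion. A minimal sketch of what an assertion with evidence might look like (all URLs and identifiers here are made up for illustration):

```python
import json

# A hypothetical Open Badges assertion.  The evidence field is optional
# metadata - which is exactly why issuers so often leave it out.
assertion = {
    "@context": "https://w3id.org/openbadges/v2",
    "type": "Assertion",
    "id": "https://example.edu/assertions/1234",        # made-up URL
    "recipient": {"type": "email", "hashed": False,
                  "identity": "learner@example.edu"},
    "badge": "https://example.edu/badges/connectivism",  # made-up URL
    "issuedOn": "2016-02-04T00:00:00Z",
    "verification": {"type": "HostedBadge"},
    "evidence": [{
        "id": "https://example.edu/portfolio/connectivism-project",
        "narrative": "Created a deliverable to train fellow students "
                     "on the principles of connectivism.",
    }],
}

# The consumer of the badge (e.g. a hiring manager's tooling) can
# follow the evidence link to see the actual work behind the badge.
print(json.dumps(assertion["evidence"], indent=2))
```

Note that the evidence entries are just links plus a narrative; if the linked artifact lives on a student's own account and gets deleted, the metadata survives but the evidence itself does not, which is the archivability problem discussed below.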

I know that the evidence part of badges is something often touted as an improvement over course grades or diplomas, because grades don't indicate what specific skills you've picked up, and this problem is a bit worse with diplomas and graduation certificates because you can't evenly compare one candidate to another (in my case, say, comparing me to some other computer science major from another university - or heck, even from my own university).

Anatomy of badge, by ClassHack

So, in theory, badges are superior to these other symbols of attainment because they tie into assessable standards (that a viewer can see) and into evidence.  And, again in theory, the layperson (aka the hiring manager) can read and assess a candidate's skills and background.  In practice, though, the evidence is lacking, and I am just as guilty of this, having issued badges in two of the courses I teach. From my own perspective-from-practice, I see at least two reasons for this lack of evidence:

1. Not all badges are skills based, and the evidence is not always easy to show.
I use badges in my courses as secondary indicators - less about skills and knowledge, and more about attitudes and dispositions.  So, I have secret badges, sort of like the secret achievements in Xbox games, that unlock when you perform specific tasks.  I let students know that there are secret badges, and that they relate to forum participation, but I don't give them the criteria so that they don't game the system.  The objective is to reward those who exhibit openness and learning curiosity.  Once a badge is unlocked I make an announcement, and other people have a chance at earning it too, if they want.  In cases like these the badge description acts as the means of telling the reader what a person did to earn that badge (i.e. helped fellow classmates at least 3 times in one semester), but I don't attach evidence from specific forums. That seems like a ton of work for nothing (since looking at text from disconnected posts isn't meaningful to someone).

2. Good enough for class - but good enough for evidence?
Another concern I've had when thinking about attaching evidence to the badges I issue is fit for purpose.  Some badges are known to my students (say, the 'connectivism' badge, where students in an intro course create some deliverable to train their fellow students on the principles of connectivism).  For my purposes an assignment might be good enough to earn a passing mark.  However, my fit for purpose is not someone else's fit for purpose.  Because of this I have not included links to evidence.  Furthermore, some of the evidence is either locked behind an LMS, or it's on someone's Prezi account, or Weebly site, or Wikispaces page.  The individual student can delete these things at will, so my links to these resources can also go dead.  So, there is an issue of archivability.

One of the things that cogdog mentioned was that "being badged is a passive act".  I think that in many instances being badged is passive, and that has certainly been my experience in a number of cases.  However, I have seen exceptions to this. There have been a couple of MOOCs, such as the original(ish) BlendKit and OLDSMOOC, where I had to apply in order to receive a badge.  This allowed me, as a participant and learner, to say that I was ready to be evaluated, and the outcome would be a badge if I passed.

What do you think?  Is the evidence more hype than anything else?  Can it be done better? If so, how?

EDDE 806 post II - Of research questions and generalizability

Yesterday evening I attended my second formal EDDE 806 session (formal in the sense that I am doing blog posts for it, as opposed to just attending and being a fly on the wall).  In any case, the session was pretty interesting, and Viviane Vladimirsky, a fellow EdD student, presented her work on her dissertation.

Just prior to Viviane's presentation, as we were going around introducing ourselves, two interesting pieces of information were shared (and reinforced).  First, when working on our dissertations, if in doubt, we should ask our committee members what they want to see addressed.  Asking people outside of your committee will just muddy the waters because, in the end, in order to graduate you only need to satisfy your committee and no one else.  I think this is sage advice, because if you ask 10 scholars to give you feedback they will all come back with different points of view (based on their own backgrounds, epistemologies, and biases).

The other piece of information (wisdom) shared was on the importance of research questions (very specific ones).  I gotta say - I am still not sold!  I get the importance of specific research questions in certain contexts, but this week I've been reading (again) about post-modernism in 804 and I guess I am rebelling a little against the notion that we have to absolutely have concrete research questions in order to research.  As I joked in the discussion forum, can't I just be the "data whisperer"?  Can I come in with the broad question (such as "what does the data tell us?"), and a grounded theory approach, and continue on with my research?  To be continued...

Anyway, on to Viviane's presentation.  Viviane is doing research in Sao Paulo, Brazil.  Her project is based on Design-Based Research (DBR) principles, and she is working on creating K-12 teacher professional development to improve teacher training using OER and to encourage the uptake of OER in the professional activities of K-12 teachers.  To do this, she is looking at it through two theoretical frameworks: the Unified Theory of Acceptance and Use of Technology, and the Integrative Learning Design Framework (which looks like an instructional design model to me). She also chose DBR because DBR is pragmatic, grounded, adaptive, iterative, and collaborative, and the designs can be modified based on emerging insights.  In a sense DBR reminds me a lot of agile instructional design.

When the limitations of the study were discussed, the issue of generalizability came up.  Again, because of my post-modern frame of mind at the moment, I don't think generalizability is an issue.  Sure, you can't necessarily compare yourself to a physicist who runs experiments and can come up with something that is generalizable (for the most part), but is that really an issue?  We, as humans, are complex beings, and a lot of different factors go into who we are and how we act.  Findings from one study may not be generalizable, but those findings, taken with the findings of other studies (in meta-studies), can bring us closer to understanding certain things that may be generalizable.  I know that we have to cover ourselves and state the obvious - that findings are not generalizable - but that seems like a given to me (and not something we should be apologetic about; not that Viviane was apologetic, but I've seen others be).

So that was it for the seminar of February 4, 2016.  Did you attend? What did you think?

Assessing the process or the product?

The other day I came across a post on ProfHacker written by Maha B. where she talked a bit about her own teaching experiences and whether one assesses the process of learning or the product of learning.  I was thinking about this question in light of my own experiences as a learner, as a designer, and as an instructor who now has had experiences in introductory courses, capstone courses, and intermediate courses.

Obviously there isn't a hard and fast rule about this.  In some courses, or parts of courses, the final product matters enough that it outweighs the grading of the process.  My gut (and the educator in me) tells me that the process is really more important than the final product. However, my own acculturation into the field of instructional design snaps me back to SMART outcomes (you know: specific, measurable, accurate, realistic, and time-bound), wherein these goals are really about the product and not the process.  I suppose if you have the freedom to tweak the learning objectives of the course so that you can value the process more than the outcome, then that is fine.  However, if you can't tweak the objectives, you have a bit of an issue.

Another thing I was considering is the specific outcome of the class.  For example, in an introductory course I taught last summer, I often leaned more toward process than overall quality of the deliverable. This made sense since the learners were new to instructional design and, like artists, they needed several attempts, and feedback on those attempts, in order to become better at design.  The final product they produced was pretty good, though it could have been better with knowledge from subsequent classes. So the final product was good for the amount of information and experience they had on hand.

On the other hand, this past fall I supervised the capstone project for my instructional design MEd program, along with friend and colleague Jeffrey K.  Even though this was a 3-credit course, there wasn't really anything being taught.  It was more of an independent study where each learner worked on their own capstone project and received feedback on each iteration submitted. While there is a deliberate process behind this capstone project, the final product is meant to be a showcase of their capabilities as instructional designers, given that they have completed at least 10 courses before they sign up for the capstone.  In cases like these, while process is important (feedback and improvement cycles), the final product is, in my mind, really what's being assessed.

That said, the case of the capstone is quite specific, and perhaps an outlier in this discussion.  In classes that I design, I prefer to give more weight to the process and less to the perfection of the final product.  It's one of the reasons I have largely moved to pass/not-pass grading in new courses I design.  Instead of having students feel like they've gotten a ton of points off for one thing or another (despite the passing grade), I think it's better for them to know that they passed the assignment and to really look at the feedback that I gave them.  If in subsequent assignments they don't put that feedback to use, they may not pass those assignments (though they can rework and resubmit), but what is important is that feedback-revision cycle.  I think it really mirrors how instructional design works anyway :-)

What do you think?


So long 2015! What a "teaching" year!


Well, 2015 is done!  Grades are in, courses are complete, and things are in process for next year.  Next spring I am not teaching, so I am thinking about cool (and instructive) things I can implement for the course that I am scheduled to teach this summer (intro to instructional design).

I won't work too hard on next summer's course just yet; too many other things to consider first.  That said, I realized late in December that 2015 was an interesting teaching year for me.  I am usually only allowed to teach 2 courses per calendar year, but through some fluke - and departmental needs - I ended up teaching three courses, all at different points of the spectrum for learners: one around the mid-point of their learning careers, one at the beginning, and one at the end.  It was also an interesting year because I handed off the course that I've taught for a long time to a friend and colleague, and I picked up two other courses that I had not taught before, so course continuity was also on my mind.

So, in the spring I taught INSDSG 684 (The Design and Instruction of Online Courses). The class was rather small (for such an important course these days), but I think that's partly because my reputation as a demanding instructor has caught up with me ;-).  The course was mostly what I inherited from Linda B., with a few changes to keep the readings current and up-to-date.  For the fall semester I passed the course off to my colleague Rebecca, and it was at that point that I really needed to explain the course in general.  When I got the course from Linda I did not receive a design document with ideas, specs, and rationale for certain activities, so as someone who has done Quality Matters I was left thinking through that framework - looking for things in the course that satisfy the requirements for QM, but without knowing for certain if that was the intent.  I think that if I could go back in time, I would have re-done the course and documented the heck out of it for instances like this, when a hand-off is necessary.  Things that are good practice in computer science (document your code) are also good practice in instructional design - document your designs!  This process also gave me pause to consider departmental course continuity beyond the syllabus.  I wonder if other instructors out there think in such depth about their course designs.

In the summer, just by chance, I ended up teaching INSDSG 601 (Introduction to Instructional Design), which is actually the first course, and a prerequisite for all other instructional design courses.  This time around I went over the course with a fine-tooth comb.  I looked at what was on Blackboard from previous instructors, at the assignments, and at three different syllabi.  It was quite interesting to see three implementations of the course, two designed for online delivery and one for in-person.  I wasn't particularly happy with the versions of the course I saw, especially considering that I had seen students downstream (in 684) for a few years and had assumed certain skills that some did not have when they arrived in that course.  I started thinking about what an intro course should contain, and how it should set up learners for success down the road - whether they continued as learners in this program, or for lifelong learning, if this was their only course in instructional design.

The two biggest things that I didn't like about previous implementations were:

  1. They were using videos created by an instructor who was no longer teaching in the program.  While the 20-minute lectures were fine, I think there is something 'off' when the person teaching the course is different from the person you listen to each week in the lectures.  It's fine to collect various TeacherTube and YouTube videos for your course (from different people), but when one person regularly lectures to your course (and introduces themselves as the instructor in one of the videos), there is real potential for confusion on the part of the learner.  At the very least, to me, it signals that the instructor doesn't care enough to redo the videos.
  2. I think parts of the course were bolted onto an existing frame (instead of the course being designed as a coherent whole from the start).  This meant that research papers and mid-term exams (where you were tested on procedural knowledge) found their way into what I primarily conceive of as a studio course.
So, I ended up redesigning the course, introducing learners to technologies, theories, concepts, and methodologies that they would see later on.  There is an element of learner choice in the course - both in deliverable formats and in topics to choose from - but the idea is still that of a studio course.  I rather enjoyed working on this redesign, since I actually got to document quite a few things.  It's not documented as thoroughly as I ask learners to document their designs, but I think that's the difference between real life and a demonstration in an academic exercise.  I will probably be teaching the same course in summer 2016, so I have an opportunity to tweak things!

One of the things that really came up (over and over) is that learners cannot separate grades from performance.  Last year I wasn't sure how to do ✓, ✓+, and ✘ in Blackboard, so I ended up using 50 (for the ✘), 80 for a ✓, and 100 for a ✓+.  The ✓+ is really meant to be an above-and-beyond type of grade (if you get a lot of ✓+ marks, you might not be in the right course).  I can't recall how many students were concerned that they were only getting a B- in the course (because all they were seeing was the 80 on assignments).  This time around I think I've figured out how to do ✓, ✓+, and ✘, so I'll see if there is a change in perception from learners.  Should be interesting.
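The mismatch is easy to see if you sketch the mapping out. Here's a minimal sketch (the letter-grade bands are my assumption of a typical US scale, not anything Blackboard-specific) showing why a ✓ stored as 80 reads to students as a B-:

```python
# Hypothetical illustration: symbolic marks stored as percentages
# get re-interpreted by students through a standard letter-grade scale.

def letter_for(score):
    """Map a percentage to an (assumed) typical letter-grade band."""
    bands = [(93, "A"), (90, "A-"), (87, "B+"), (83, "B"), (80, "B-"),
             (77, "C+"), (73, "C"), (70, "C-"), (0, "F")]
    for cutoff, letter in bands:
        if score >= cutoff:
            return letter

# The numeric stand-ins I used for the symbolic marks:
symbolic = {"✘": 50, "✓": 80, "✓+": 100}

for mark, score in symbolic.items():
    print(f"{mark} stored as {score} -> reads as {letter_for(score)}")
```

A ✓ was meant to say "you met expectations," but stored as 80 it reads as a B-, and a ✘ stored as 50 reads as an outright F - hence the worried students.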

Finally, in the fall I ended up advising in the capstone seminar, seeing students at the other end of the spectrum.  I think the challenge in undertaking this course is that you have certain expectations of what learners - those who are almost credentialed instructional designers - should be able to do, and of the discourse they should be able to produce in their documents. When deliverables fall shy of those expectations, it's challenging at times to come to a common understanding, because the learners are frustrated by the experience as well - that of being in their final course but finding things not as easy as they expected.  This experience showed me that all faculty in a program should take turns being the advisor or grader in a final exercise.  That way they all get to see where the structural weak points in a program are, so that they can be addressed in the curriculum.  When only one or two faculty undertake this, they might just sound like broken records and be ignored.

To wrap things up: I've seen comments from colleagues over the years about 'final exam season' being 'student drama' season - you know, of the country song variety: spouse left me, took my dog and my truck, and left me with the moving bill, or something along those lines.  I think that even jokingly this is potentially problematic, because true student drama cases are probably few and far between, and joking about it being the season for student drama (potentially) predisposes us to expect the worst from students. So, I guess my advice going into 2016 is this: expect more from your students, not less, and definitely not drama :-)

Happy 2016!


Second life? Whatsdatnow?

Last week I was reading this article about abandoned campuses in Second Life - you know, the virtual world that took the educational world by storm back in 2008(ish) and is now more or less synonymous with major flops and misdirections in educational technology.

For the past few days I've been searching like a madman through old backups of screenshots I had taken when I was more active in Second Life, to showcase my tall, skinny, blasé, goth avatar with black wings (specifically, sitting with his feet on a conference room table).  After looking through my computers, and through some backup hard drives, I ended up with nothing.  There is probably something there, but I didn't really want to invest too much time in finding that specific picture of Milo Vuckovic (the avatar). Luckily I had one photo on my Flickr account with his name tagged.  For a brief moment I did entertain the thought of downloading the SL client and seeing if my university's island is still there so I could re-create the pose...and then I laughed out loud ;-)

Milo is not afraid of Ninja
So, a little trip down memory lane.  Back in 2008, having finished my second Master's degree, I embarked on my third and fourth Master's degrees simultaneously.  I was looking at jobs as an educational technologist, but without a degree in instructional design people would not look at my resume.  Too bad.  Anyway, the instructional design program I joined also had a new director at the time, who was hired partly because of her work with Second Life. In the introduction to instructional design course I ended up spending time exploring the world of Second Life (and wondering why we had made such an investment in it).

Back then the university system spent $25,000 to get this up and running for our five campuses. The Teotihuacan project at UMass Dartmouth was all the buzz, and I got the sense that our campus was pushing people to "be innovative" with Second Life, so as not to be bested by our sister campus. Specifically, the UMass Dartmouth project focused on a recreation of the architectural monument, the Temple of Quetzalcoatl, as well as a full-size replica of the Palace of Quetzalpapalotl, for students to study and explore (from that link).  This was an interesting idea, but I think something is lost when you re-create this in a virtual world.  I wonder if the temple, and the island, are still up and running.

Even back then the island, and Second Life in general, was pretty dead.  As much as I was interested in trying to build things in SL, my main hurdle was the closed, proprietary nature of everything.  Unlike the world wide web, I could not set up my own server, run the SL equivalent of Apache, and get my own virtual world up and running.  This pay-to-create model pretty much turned me off.  Also, when you stopped paying the maintenance fees your stuff would go away (I did not see a way to download backups at the time).  It seemed like wasted effort.  I think the fact that anyone can install their own Minecraft server has made it much more popular compared to SL.

Another problem I saw with SL at the time was the needless replication of the real world in SL. Designers were not thinking outside the box in terms of what could be done, or in terms of heuristics.  So professors would have their students join a virtual class in Second Life only to sit down in a virtual seat and be lectured at.  M'eh.  I can do that with WebEx, and it will take up fewer computing resources than SL ;-)

It seems to me that people drank their own Kool-Aid.  In several articles and interviews I saw "working with real world objects" cited as a benefit of Second Life.  To me, working with real world objects means working with real world objects (or at least 3D-printed replicas).  A virtual world, even with virtual reality gear such as the Oculus, still means you are missing important tactile information. You are still tied to a keyboard, mouse, and/or joypad for navigation.  I think that virtual worlds do have a place - for example, replicating living cities that no longer exist, such as Pompeii - but you still need to design for immersion. Something that, for me, SL lacked.

Where is everyone? (image from article)

What about you? Do you still think of Second Life?  What do you think?