Club Admiralty


Graduate Students as OSCQR reviewers

OSCQR Digital Badge

In the beforetimes (summer 2019), I had access to some graduate assistant hours and I needed to find a project for them. Since this group of graduate assistants was destined to become educators, I thought it would be interesting to train them on the OSCQR rubric and have them serve as "reviewer 1" and "reviewer 2" on a few course reviews that I wanted to undertake. I took on the role of the instructional designer in this exercise (reviewer 3). Now, I know that the faculty member who is teaching these courses also needs to be part of the conversation, but more on that later...

My original goal for this exercise, beyond the actual review, was to conduct a collaborative autoethnography of the process of having graduate students conduct OSCQR reviews of courses that they themselves had most likely taken as learners. Content-wise, the material should have been similar even if the instructors and modalities were potentially different. Well, the Fall 2019 semester got very busy, and then we've been living in COVID world since 2020. Additionally, those students graduated and moved on with their professional lives, so such a paper is no longer possible. I was considering using an autoethnographic approach with just my own reflections on the process (after all, I still have most of my notes), but I've got several other irons in the fire, and I am not sure how useful it would be to go through the process of trying to find a journal to get a peer-reviewed version out. Recently, though, I was inspired by Maha and her blogging of an unpublished paper, so I thought I would give this approach a try. So here it goes!

The Need

It's been my observation over the last 15 years in higher education that there are many silos that just don't connect. One might expect silos to exist between departments, but through interactions with various departments I've come to the conclusion that many faculty know what they teach and have a vague idea of what other courses exist in their department; however, they don't have deep knowledge of what goes on in those courses. Additionally, even if faculty are "peer reviewing" a fellow colleague's course (including courses of colleagues in other departments), there is a reluctance to address issues with pedagogy or materials because of "academic freedom" 🙄. Even things like accessibility get rolled into the avoidance technique of academic freedom, which makes course improvement an issue. I suppose there is a curriculum committee in each department for this, but as a designer, every time I've brought it up with academic departments I always get audible grunts or eye rolls. No one wants to peer review someone else's course, even with a rubric.

In any case, for this project, I was interested not in what course materials faculty were using in their courses, but in things like the alignment of objectives, readings, and activities (making sure the connections were clear to learners), accessibility, design, and tool usage. My main focus was the student experience, and the goal was to present findings and recommendations in the Fall semester. This would be a good opportunity to open up a dialogue between the faculty on course improvement and cross-course communication in general.

The Assistants

The graduate assistants who were my reviewers for this project were students 25-30 years old who were mostly done with their degrees, so they had already experienced most of the curriculum they were reviewing through a student lens. Some courses, however, were new to the reviewers. None of the assistants had any instructional design experience, and their pedagogical training covered only face-to-face contexts. My hope in training them on OSCQR during this review was that they would also be able to take what they learned, both from OSCQR and from reviewing all-virtual classes, into their face-to-face classrooms (and virtual classrooms if the need arose). Each assistant had 10-12 hours per week of work (if I remember correctly).

The Process

The process started with a virtual training session on OSCQR 3.0. This took place over Zoom since, even in 2019, we had a hard time getting everyone to campus at the same time. It was also not necessary to meet face-to-face for this. I walked the three assistants through the OSCQR rubric and its annotations and provided a quick instructional-design-for-online-learning boot camp. This was the highlights-of-the-highlights version of ID for OL. To demonstrate the rubric, I used one of the soon-to-be-reviewed courses as an exemplar.

There were 18 courses to be reviewed.  These were the most commonly offered courses in the department, so they represented the biggest bang for time spent.  With two reviewers per course, this meant that each reviewer would review 12 courses over the summer.  This also meant that we were reviewing about one course each week of the summer, and once a week we tried to have a debrief about our findings. I was available for questions throughout the week. At the conclusion of this review, each of the 18 courses had qualitative feedback from the three reviewers. 
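As a quick sanity check, the workload described above works out like this (a minimal Python sketch; the course, reviewer, and assistant counts are from the post, while the 12-week summer session length is my assumption):

```python
# Reviewer-load arithmetic for the summer OSCQR review project.
# Figures from the post: 18 courses, 2 graduate-assistant reviewers
# per course, 3 assistants. The 12-week summer length is an assumption.

courses = 18
reviewers_per_course = 2
assistants = 3

total_reviews = courses * reviewers_per_course        # 36 review slots
reviews_per_assistant = total_reviews // assistants   # 12 courses each

summer_weeks = 12  # assumed length of the summer session
pace_per_week = reviews_per_assistant / summer_weeks  # roughly 1 course/week

print(f"{reviews_per_assistant} reviews per assistant, "
      f"about {pace_per_week:.1f} per week")
```

Which is how each reviewer ends up with 12 courses, at a pace of about one review per week of the summer.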

What worked in the training and what didn't?

One of the areas for improvement is definitely the demo course used to train the reviewers on OSCQR. I didn't have much of an opportunity to create a sample course for the training, so we all evaluated a course from the pool of courses that were going to be reviewed that summer. I think there are pros and cons here. One pro is that this course gets more time under the microscope than it normally would, but on the other hand, you risk reviewing every course in that collection against that initial course, so reviews might end up recommending that courses look like that first one (if the reviewers thought it was particularly well designed).

In retrospect, I should have used another course; for example, I could have reached out to a colleague in another department to see if they'd volunteer their course as one to train on. I did end up using one of my old INSDSG courses to provide exemplars, or alternate ways of addressing elements of the different categories of OSCQR. I think the Zoom training session worked well, and the weekly debriefs worked well enough. If I were to do this again, I'd formalize the debrief a bit, maybe asking reviewers to show and tell some things that they felt worked rather well in the courses they reviewed, along with any questions they had. I might also include a weekly reflection.

One of the things I wanted to get to with some sort of ethnographic component in the original project was to see if student reviewers had any trepidations about reviewing courses in their department, if they felt like they needed to be positive, or if they felt that there might be fear of retaliation. I didn't sense a lot of this, but it would be good to have some data.

Finally, I think that 18 courses was a little much for the reviewers we had available.  I think the reviews felt a little bit like a conveyor belt rather than an opportunity for review and conversations, which I hoped to foster.

How did the courses fare?

Most courses made it through the OSCQR review with minor or moderate changes being suggested. I think that this is partly a sign of the maturity of the program (courses have undergone many revisions over the last 15 years), and partly due to faculty familiarity with things like Quality Matters and OSCQR. Also, in order to teach online (at least back then), faculty needed to complete some coursework to prepare them to teach online. Additionally, our ID group had begun an accessibility campaign the year prior to the review, so I have a sense that many of the inaccessible elements were handled then.  We did find some accessibility issues, but I think there would have been more had our colleagues in ID not been proactive with their initiatives. 

Some of the things that came up as elements for review included onboarding information, such as technological requirements for the courses; links and information about campus services (library, accessibility office, tech support, etc.); contact info for the department; and, of course, the accessibility of attached documents and presentations. My two big takeaways here are these:

1) Some common elements of courses (such as access to campus services, department contacts, and so on), can go into a template that every course can use.  While I don't think one template could apply to every single course a department uses, at the very least some common things that you want every student in the department to have and know should be included. Each onboarding for a class will vary depending on the topic, the course, and the instructor, but some things can be standardized.  I think having faculty develop their own department's template(s) would go a long way to help learners recognize the signposts in each course so that they can wayfind with more ease, yet still retain faculty voice in the decisions made in those templates.

2) Accessibility is an ongoing effort! It needs to be baked in, not sprinkled on (as my old friend Margaret used to say).  Even though we worked hard in 2018-2019 to address accessibility issues, there were still elements of courses that were inaccessible. I think that this is one of those battles that require a few stakeholders to be involved.  The faculty member should certainly strive to make the content as accessible as possible right from the start, but maybe there could be a team in the department or the college that helps out with ensuring last mile compliance. Automation has certainly helped a lot, for example automatically generated closed captioning, but those should be verified by a human.

Reflection: Technosolutionism and Faculty Learning Communities

Over the past decade (or more), our ID group has been great! They've nurtured the faculty who've been teaching online (at least from what I can see from my end), but there is a certain spirit that seems to just stick around: a faith in technosolutionism. From automatic AI transcriptions, to "plagiarism detection" tools like Turnitin, to remote proctoring. From chats with other colleagues, some departments seem to see cheaters and plagiarizers everywhere. This isn't healthy.

Another issue arises with adjunct faculty. There needs to be a way to welcome our adjuncts into the overall discussion and training around teaching and learning, but it can't be uncompensated. Full-time faculty can count this kind of thing toward their PD and be paid for it, whereas adjuncts are only paid for the time they are in the classroom. If we want to include our adjunct colleagues in these discussions (which in turn translate into classroom pedagogies), we need to compensate them for their time and encourage (or require?) participation in faculty learning communities, both intra- and inter-departmentally. I also think it's a good idea (at least for graduate students who will be going into teaching after they graduate) to prepare them to review courses and offer constructive feedback, even to people they know. I am not sure how to get current faculty members unstuck from the fear/concern of making suggestions to their colleagues (you know... because of "academic freedom"), but maybe we can break that cycle with the next-gen of teachers and IDers.

That's it for now.  Your thoughts?
