Club Admiralty

v7.2 - moving along, a point increase at a time

Multilitteratus Incognitus

Pondering what to learn next πŸ€”

Changing mental gears and putting yourself in students' shoes

NightCafe AI image of male professor returning to be a student
Gazing at Learning

The other day I was doing some reflecting on my own "return" to being a learner after my doctoral journey ended. Last semester I took a course on negotiation, which I saw as relevant both to my work and to my role as a member of our union's contract bargaining team this round.  It is a graduate-level course, typically taken in either the first or second term of a graduate student's journey in the Conflict Resolution program at my university.  While the semester was busy, I rarely broke an (academic) sweat.  The course was challenging, don't get me wrong, but I found the number of readings manageable (since I have my TTS routine), and the papers required only 5,000-6,000 words cumulatively over the semester (plus maybe another 8,000 across forum posts throughout the term).

In the grand scheme of things, this feels normal for a graduate course (from a design point of view), but it feels easy having gone through the experience of completing a doctorate. I think classmates who were in their first master's program, and may have just returned to school either straight from undergraduate studies or after a hiatus, probably found it a bit more challenging.  This reminded me of the many times that colleagues don't put themselves in the student's shoes when designing (or teaching) a course.  Something seems easy because they can produce a 6,000-word paper in a month (not a term), so they assume it's acceptable to ask the same of learners who are just learning the ropes. It seems like we often superimpose our current skills and capabilities on our younger (less experienced) selves, and that doesn't serve our learners.

Last term I completed the negotiation course (yay!), so I saw that experience through. Now, as I am pondering what else to learn (I did say I was a lifelong learner), I am wondering if others have experienced that kind of shift in mental gears, whereby if they have completed a higher degree (e.g., Master's) they find undergrad courses (e.g., a sophomore level course) too boring.  Do you roll with it and enjoy the change of pace? Do you challenge yourself with your own challenges?  Do you not sign up for those courses?  And, how does being a learner again impact your ways of being in the teaching (or designing) role you have?


Preferences for learning modality following COVID-19

Yesterday evening I came across a post by Tony Bates where he shared some findings from some recent reports on faculty (and student) preferences for learning in a post-pandemic world. I haven't read the reports yet, but I had some initial thoughts based on Tony's high-level overview of those three reports.  

Two big things jumped out at me. The first is that students and faculty aren't on the same page (for the most part), and faculty still overwhelmingly prefer on-campus teaching.  Tony notes that, at least in the EDUCAUSE data, more faculty have expressed an interest in something other than purely face-to-face compared to pre-COVID numbers (the preference for purely face-to-face went from about 79% pre-COVID to 50% now).


Tyton Partners' Analysis of Modality Preferences

The first thing I noticed above is where students and faculty fall in their respective preferences.  Taken together, something other than purely face-to-face is a clear winner for student preferences.  Now, what that something else actually is...well, that's up for debate. I honestly don't like how they've broken up Blended and Hybrid. Their definition of Hybrid seems to use a kind of Allen & Seaman definition of 30-80% online (the traditional definition of blended/hybrid), whereas the Blended definition here appears to be somewhat of an inverse: 80-90% on-site with some online. This seems like the worst of both worlds, sort of like having a job that's "Hybrid" but requires you to go into the office four days per week πŸ˜….  The numbers on HyFlex surprise me on the student side, but not the faculty side.  The various *flex approaches require A LOT of work from the instructor without requiring much skin in the game from students: students can choose to be virtual at any time, without much warning to the instructional staff that would allow them to prep for the changing classroom dynamics.  I sometimes feel like *flex courses are students responding "maybe" to an invitation when they clearly want to decline it but feel bad about doing so πŸ˜‚

Back in 2014, when I started my EdD at Athabasca, I had a chat with Pat Fahy (now retired) during one of our breaks from orientation. It was about why students choose distance education over physically proximal education, and his response was "It's always the parking, stupid!"  Of course, it goes beyond parking.  In Boston, parking is one thing ($15 each time you park on-campus!), plus traffic, plus coming after work, plus gas money, plus tolls for some people coming from farther away. The monetary and time costs add up.  Of course, pre-pandemic, not everyone thought about this.  We were all in a routine, so when the idea of DE came up, the usual (and very automated) response was "Oh, online is fine, but I learn better on campus" πŸ€·β€β™‚οΈ. I think what we are seeing in a post-pandemic world is that students have had a break from their routines, and when they can step back and compare the options critically, they may choose differently.  In fact, we see from the data above that most students picked some option other than purely face-to-face, which is telling, IMO.


EDUCAUSE analysis of faculty preferences

In EDUCAUSE's report, one of the reasons faculty prefer on-campus teaching is to have a connection with their learners. Again, having taught folks how to teach online (for over 12 years now... oh, how time flies...), this is something I've heard frequently: I prefer teaching on-campus because I can see my students, get a temperature read on the classroom, connect better (whatever that means), etc.  I think this may be true for some folks, but it's not true for all, and certainly not in all cases.  It definitely privileges students who are extroverts and who engage more in those social spaces. Assuming you can read people's facial expressions can also lead you to incorrect conclusions: I often went to class after work, for example, so by that time I had already been away from home for about 10 hours.  Any apathy or boredom on my face was probably fatigue; otherwise, I wouldn't have been in class. I am not convinced that we can accurately read people's faces and determine their level of engagement (maybe a lit review I should dive into one of these days).

Ultimately, what this need to connect points me to is this: instructional staff are replicating their own classroom experiences in their own teaching, and they have never learned new ways of being.  Most folks who've been active in virtual spaces (chatrooms, BBSs, IMs, social media) can tell you that they've felt the kinds of closeness in virtual spaces that people describe feeling in physically proximal spaces (yes, there are probably a lot of asterisks on this claim). It's something that's learned and experienced, but it requires practice (just as F2F requires practice... and most of us have been practicing F2F for a long time). I think it's important to cultivate additional ways of being that are geared toward virtual spaces.  This doesn't mean that F2F is bad, but it also doesn't mean that F2F is the only way.


Analyzing the Synthetic Syllabus

Bing Image Creator: A Syllabus

Wow...it's been almost two months since I started this post! It's hard to believe that it took this long to return to this thought experiment.  Just to remind the diligent reader of this blog, this came out of not one, not two, but multiple places on the web (including professional development conferences!) whereby instructional designers (and other professionals) were demonstrating the use of GPT to put together quick and dirty course outlines for the busy adjunct. While I've got issues with this framing, I'll put those aside for now.  I thought that it might be interesting to actually go through the process to create a course outline and syllabus for a course that I used to teach often before I started my doctoral journey. The course is INSDSG 684: The Design and Instruction of Online Courses, a graduate course in the MEd program of Instructional Design at UMass Boston.

I think it's important to start with a course that you know, so that you can both critique the design "decisions" that GPT makes and also leave yourself open to being pleasantly surprised by what an LLM produces.  The three AI services I tested were SMARTIE, ChatGPT 3.5, and Gamma. I won't comment on SMARTIE extensively because I chose to run another course that I used to teach frequently (INSDSG 601: Foundations in Instructional Design and Learning Technology) through it so I could do a compare/contrast, but I got bogged down enough to just junk that attempt. If you're curious as to how far I got, here's the unannotated Google doc of the SMARTIE 601 syllabus/outline.

So, back to 684.  Here's the ChatGPT syllabus with my comments, and here's the Gamma PowerPoint output (which is still in beta). Feel free to go through and make your own annotations.

Some big-picture things I've noticed:

  • It took me over 90 minutes to prompt CGPT for everything, copy it into the Google Doc, and format it. That time doesn't include deleting duplicates, critically analyzing the syllabus, or addressing duplications and inconsistencies.  If I were called upon to teach this course again, with no current syllabus or list of learning objectives from the department, it would take me less time to put one together myself than to prompt CGPT to produce something for me.
  • CGPT produced a list of required and optional texts for the course, and while most were legitimate (i.e., they exist), it really did incorporate some made-up stuff in that list, which means that anyone using this really, really, really needs to fact-check those books and articles.
    • It's worth noting that GAMMA did the same thing with the course text. It totally made up a textbook. How do I know? I couldn't find it on Google or on IGI's website, and it's not listed on the faculty profile of the supposed author.
  • CGPT produced a plan that made for an uneven and unreasonable workload for learners. For example, just for Week 1, the assigned readings came to around 450 pages across two assigned books. This pattern continued through Week 10, while Week 11 only had 2 articles (maybe 40 pages between them). Alignment between the learning objectives for the course, the weekly topics, and the readings was there, in a way, but boy was it a lot!
  • Speaking of additional resources, CGPT recommended videos, podcasts, and articles (at my prompting), but didn't include any URLs.  I know that this is a known limitation, but when specific resources are suggested, resources that have well-defined URLs and DOIs (e.g., the Teaching in Higher Ed podcast and Sal Khan's TED talk, Let's Use Video to Reinvent Education) should be no-brainers.
  • It seems like CGPT has a weird fascination with pairs.  If you skip down to the Activities section of the syllabus, you'll see that basically every week has two activities for students to complete. While CGPT gets points for keeping things interesting and varied, I am having a hard time seeing how the learning community gels if students are jumping from task to task.
  • The criteria points for graded assignments were basic and repetitive for all assignments.  For what it's worth, I felt the same way about rubrics produced in SMARTIE for the other class.
  • Going back to student overload for a moment, I don't think the activities and the course deliverables were all that integrated (or at least it seems like CGPT didn't consider what had come from past prompts when it provided me with activities): the grading breakdown for the course not only didn't account for the activities and assessments that came before, it added to them!
  • In terms of course policies, I was pleasantly surprised that the camera policy for live sessions is "camera optional".
  • Also, in policies, many things that were presented as a "policy" seemed more like classroom norms to discuss and engage with learners about.
  • GAMMA did produce some headscratchers - such as "screen recording" being used in the design of the course. I don't know how germane that is to learners in the course.
  • Finally, with the exception of a hiccup in CGPT, it seems like a lot of final projects for this course are online course designs, which is to be expected.  In fact, I think all 5 instructors for this course (going back to 2008) have had this as the final deliverable.
Overall, I think that if you're a subject expert and have taught the course many times, LLMs won't really save you much time. You're better off going with your gut, or, if you're really in a crunch, asking a colleague who teaches a similar course whether you can build off their syllabus.

Your thoughts?


ChatGPT to address faculty burnout

A nightcafe AI image creation using the prompt: A professor dealing with digital burnout and attempting to address it.
NightCafe: Professor dealing with Digital Burnout

It seems like I am operating on Greek Time for #etmooc2 πŸ˜…. This is a post for the second session of #etmooc2, which dealt with ethical issues around AI in education (and maybe AI broadly).  Since the recording of the third session has not been posted yet, I'll make the argument that I am still "on time" for some commentary πŸ˜‚. This post has been in my drafts for 2 weeks now.  With the end of the term upon us, I guess I have a little more time to process some thoughts.

In any case, when I started to put together this blog post a couple of weeks ago, I was signed up for a symposium (or mini-conference?) hosted by a notable LMS provider, and of course, AI was the hot topic du jour.  One of the sessions that I attended was hosted by instructional designers using ChatGPT to assist in getting started with course development; especially when that course development comes at the 11th hour, when an adjunct finds out that they are teaching a course next week! Eek!

In this scenario, an adjunct is asked to create an "INTRO101" course that starts in 2 weeks. As the adjunct, you need to create a syllabus, rubrics, discussions, lecture notes, and quizzes. It's noted in this case that this course design may be on top of your current full-time work, so time is at a premium (as is probably the case for most adjuncts).  As someone who's worked as an instructional designer, I appreciate the "let's make the best of it" approach, but using generative AI (gAI) to ameliorate a busted system is really problematic! It's also instructional design 101! As part of a needs analysis, faculty don't necessarily need a faster/more efficient way to design courses; rather, they need the systemic issues to be addressed.  For example:

1) If the course is an INTRO101 course (not an actual course, just a placeholder for a common course, required of majors, and usually taught by many instructors), the adjuncts should not need to scramble to put together course materials for this course.  The readings, activities, assessments, and rubrics for such a course should already exist, and an instructor should walk in and "turn on the lights" metaphorically.  They should be free to adjust the settings and the materials, but ultimately these should be minor tweaks, not major development work!

2) Hiring for upcoming semesters should not be a "shit's going down in two weeks! Let's roll!" kind of moment. Adjuncts should know at least a few months in advance if they are likely to have a course, and they should be compensated for any additional training required, and any prep they do outside of their contractual period.

3) Faculty are hired as subject experts.  Assuming that someone is hired for their specific subject knowledge (and they're not just another warm body), having gAI create outcomes, course outlines, and assignments (the entire skeleton of a course) really takes away from that subject expertise and starts to frame the course in a way that removes human agency.  Humans fill in the gaps in what the gAI produces but don't necessarily question it. How do I know?  We see this already with publisher-provided course materials.  I've had more than one professor in my time as a learner who basically used the publisher's slides, structured the modules around the textbook, and built assignments around the text. Throw in products like My___Lab, and you've removed agency from a human by making it convenient to just go with it.

4) Finally, the material that a gAI produces isn't that great. I have downloaded some ChatGPT outcomes and activities for a class I used to teach (future post), but for this one I'll focus on an experiment I ran with ByteAI. I asked it to give me eight learning objectives for a course titled "Artificial Intelligence in Human Education, Development, and Society." This is what it produced:
  • Understand the basic concepts and principles of artificial intelligence and its applications in human education, development, and society.
  • Analyze the ethical and social implications of using artificial intelligence in education, development, and society.
  • Evaluate the potential benefits and limitations of using artificial intelligence in education, development, and society.
  • Develop critical thinking skills to assess the impact of artificial intelligence on human learning, development, and society.
  • Explore the different types of artificial intelligence technologies and their applications in education, development, and society.
  • Identify the challenges and opportunities of integrating artificial intelligence into educational and developmental practices.
  • Develop practical skills in designing and implementing artificial intelligence-based educational and developmental interventions.
  • Collaborate with peers to design and present a project that demonstrates the use of artificial intelligence in education, development, and society.

If you'll notice, the objectives are basically fill-in-the-blanks.  The same starting text can be used for any kind of course. As an aside, it would be interesting to see what it does with other course titles, so I might feed ByteAI some course titles from various university courses. Maybe a subject for further experimentation.
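To make the fill-in-the-blank point concrete, here's a playful little sketch: objectives this formulaic could be generated with a trivial string template, no LLM required. The template wording is lifted from the bullets above; the function name and sample course title are my own invention.

```python
# A playful sketch: the AI-generated objectives above reduce to a
# fill-in-the-blank template. Swap in any course title, and the
# "objectives" read the same.

TEMPLATES = [
    "Understand the basic concepts and principles of {topic} and its applications.",
    "Analyze the ethical and social implications of {topic}.",
    "Evaluate the potential benefits and limitations of {topic}.",
    "Develop critical thinking skills to assess the impact of {topic}.",
]

def boilerplate_objectives(topic: str) -> list[str]:
    """Return generic 'learning objectives' for any course title."""
    return [template.format(topic=topic) for template in TEMPLATES]

# Works just as "well" for any course πŸ™„
for objective in boilerplate_objectives("Underwater Basket Weaving"):
    print(objective)
```

The point being: if a template can reproduce the output, the output carries no subject expertise.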

Next, I asked ByteAI to provide me with three substantive graduate class assignments for a course titled "Artificial Intelligence in Human Education, Development, and Society." This is what it gave me: 
  • Research Paper: Students will be required to write a research paper on a topic related to the use of artificial intelligence in human education, development, and society. The paper should be based on a critical analysis of the literature and should include a discussion of the ethical and social implications of using artificial intelligence in these contexts.
  • Case Study Analysis: Students will be required to analyze a case study that involves the use of artificial intelligence in human education, development, and society. The case study should be selected from a list of examples provided by the instructor or chosen by the student with instructor's approval. Students will be required to analyze the case study from multiple perspectives, including ethical, social, and practical considerations.
  • Project Proposal: Students will be required to develop a project proposal that involves the use of artificial intelligence in human education, development, and society. The proposal should include a detailed description of the project, including its goals, methods, and expected outcomes. Students will be required to present their proposals to the class and receive feedback from their peers and the instructor.
Again, this is very much magnetic-poetry/fill-in-the-blank output, IMO. This kind of output can apply to any class.  Just change "Artificial Intelligence in Human Education, Development, and Society" to your course title πŸ™„

Anyway, that's what I have for session 2 of #etmooc2. If I have the energy, I'll go through the output of ChatGPT to see what it thinks my Intro to Instructional Design & Educational Technology course should look like ;-)

ETMOOC2 Session 1 Ponderings - Part III (the outtakes)

Me in AI generated Star Trek themed image
Me again, anime-style AI (or at least what
nightCafe thinks I look like in this setting)
Alright, so here's my final post on session one of #etmooc2.  You can find post 1 and post 2 on this blog. For this post, I thought I'd share some prompts and responses from my playing with ChatGPT.  Fair warning: I wrote a profanity-laden email for ChatGPT to rework (it sounds badly written, IMO, but it still has lots of profanities). I tried to channel the r/antiwork subreddit. I guess a content warning is appropriate.

Here is some more ChatGPT prompting... about me. I have underlined all of the information I think is wrong. This can be wrong to a small extent (i.e., it's exaggerated) or to a large extent (i.e., it's factually wrong).


Prompt: Please give me an author biography for Dr. Apostolos Koutropoulos that is 200-300 words

Attempt 1:

Dr. Apostolos Koutropoulos is an educator, researcher, and advocate for open education and open technologies. He was born in New York City and raised in Greece, where he developed a passion for teaching and learning. He earned his Bachelor's degree in Education from the University of Athens and later moved to the United States, where he completed his Master's degree in Education at the University of Massachusetts Amherst.

Dr. Koutropoulos went on to earn his Ph.D. in Educational Technology from the University of Massachusetts Boston, where his research focused on open education and open technologies. He has since become a leading voice in the open education movement, advocating for the use of open educational resources (OER) and open-source software in education.

In addition to his work as an educator and researcher, Dr. Koutropoulos is also an active member of the open-source community. He has contributed to several open-source projects, including the Moodle learning management system, and has organized and participated in numerous open education events and conferences around the world.

Currently, Dr. Koutropoulos serves as the Director of the Online Learning and Instructional Design program at the University of Massachusetts Boston. He continues to research and promote the use of open education and open technologies, and his work has been published in several academic journals and presented at conferences worldwide.

Attempt 2 ("regenerate response")

Dr. Apostolos Koutropoulos is an educator and researcher in the field of instructional technology and e-learning. He holds a Ph.D. in Educational Psychology from the University of Connecticut and has taught at various institutions including the University of Massachusetts Amherst, the University of Illinois Springfield, and the Open University of Catalonia.

Dr. Koutropoulos is known for his work in open education, specifically in the areas of open educational resources and open online courses. He has been involved in various open education initiatives and has authored and co-authored several research articles and book chapters on the subject.

Aside from his work in open education, Dr. Koutropoulos has also conducted research on the use of technology in education, including the design and implementation of online learning environments and the use of mobile devices for learning. He has published extensively on these topics in peer-reviewed journals and presented his work at various national and international conferences.

Dr. Koutropoulos has also been involved in various projects related to educational technology, including the development of an online learning management system and the creation of multimedia educational resources. He has been recognized for his contributions to the field of instructional technology, receiving awards for his work from organizations such as the Association for Educational Communications and Technology and the Sloan Consortium.

Overall, Dr. Apostolos Koutropoulos is a highly respected figure in the field of instructional technology and e-learning, known for his contributions to open education and his research on the use of technology in education.


Prompt: If I were making an RPG character sheet for Dr. Apostolos Koutropoulos, what would his stats be?

As an AI language model, I don't have access to Dr. Koutropoulos' personal attributes or abilities. However, based on his professional background and accomplishments, here are some potential stats for an RPG character sheet:

Intelligence: 18 - Dr. Koutropoulos has a Ph.D. in Educational Psychology and has conducted extensive research in the field of instructional technology and e-learning.

Wisdom: 16 - Dr. Koutropoulos has experience teaching at various institutions and has been involved in numerous open education initiatives, demonstrating practical knowledge and insight.

Charisma: 14 - Dr. Koutropoulos has presented his work at various national and international conferences and has received awards for his contributions to the field, indicating a level of influence and respect.

Dexterity: 10 - While not specifically related to his professional background, this score represents average physical agility and coordination.

Strength: 8 - Again, not related to his professional background, but this score represents below-average physical strength.

Constitution: 12 - Dr. Koutropoulos has been recognized for his contributions to the field of instructional technology, indicating a level of resilience and perseverance.

Of course, these are purely hypothetical and subjective scores based on one interpretation of Dr. Koutropoulos' professional background. An actual RPG character sheet for Dr. Koutropoulos would depend on the game's mechanics and rules.


Prompt: If Dr. Apostolos Koutropoulos were a character in a collectible card game, what would his stats, special abilities, and character history be?

Card Name: Dr. Apostolos Koutropoulos

Stats:

  • Intelligence: 90
  • Wisdom: 80
  • Charisma: 75
  • Dexterity: 40
  • Strength: 30
  • Constitution: 50

Special Abilities:

  • Open Education: Dr. Koutropoulos can create and utilize open educational resources to enhance his abilities and those of his allies. When he is played, he can draw an extra card from his deck.
  • E-Learning Expert: Dr. Koutropoulos can use his expertise in e-learning to gain an advantage in battle. When he attacks, he can deal extra damage to opponents with lower Intelligence or Wisdom stats.
  • International Recognition: Dr. Koutropoulos' contributions to the field of instructional technology have earned him international recognition. Once per turn, he can draw a card from his opponent's deck and add it to his own hand.

Character History:

Dr. Apostolos Koutropoulos is a renowned expert in the field of instructional technology and e-learning. He has spent his career researching and promoting open education, creating and utilizing open educational resources to benefit learners around the world. Dr. Koutropoulos has also been involved in various projects related to educational technology, including the design and implementation of online learning environments and the development of multimedia educational resources. His contributions to the field have earned him international recognition and numerous awards. In battle, Dr. Koutropoulos utilizes his knowledge and expertise to gain an advantage over his opponents, using open education and e-learning to enhance his own abilities and those of his allies.

⚠️🚫🚨 profanity ahead!⚠️🚫🚨 

Prompt: 

Can you please write this in a less angry and more professional voice. It's going to my manager: "Hey Mike!  You know, you're just one increadibly dull fuckwit who has absolutely no idea how to do his job, you don't give a fuck about your employees, and you're just looking our for yourself you selfish prick!  In all seriously, I wonder how this restaurant hasn't been shut down by the health department; because if we treated the customers like you treat your employees, the food would be a shitty health hazard!  So as to retain what little I have of my mental and physical health, I say fuck you to you, fuck your "mandatory overtime" that's not actually overtime because you pay us shit - which by the way is against employment laws. I QUIT."

Response:

Dear Mike,

I hope this message finds you well. I wanted to discuss some concerns I have regarding my employment with the restaurant. I have been feeling increasingly frustrated with the lack of support and attention to employee needs, and I believe that some changes need to be made in order to create a positive work environment.

One area of concern for me has been the issue of mandatory overtime. I have noticed that this is a frequent occurrence, and I believe that it is important for us to follow employment laws regarding fair compensation for overtime work. Additionally, I have felt undervalued and underpaid for the work that I do.

I would appreciate the opportunity to discuss these issues with you further and work together to find solutions that benefit both the restaurant and its employees. Thank you for your time and attention to this matter.

Sincerely, [Your Name]

~~~~

So...what are your observations of these outputs?

~~~~

Image prompt for NightCafe, used in this post: please create a character card for a star trek themed collectible card game where the character's name is Dr. Apostolos Koutropoulos


ETMOOC2 Session 1 Ponderings - Part Deux

NightCafe generated image of me in star trek setting
Another me in a Star Trek setting, by NightCafe

This is a continuation of my previous post from the other day. I didn't want to leave things on an all-negative note, so here's part II with some thoughts on how AI might be used (or at least areas of AI that I am warming up to).  This isn't a post about the current state of AI, but rather a 5 (or 10) year look ahead.  It's mostly inspired by a recent tweet by Tim Fawns, who asked folks to think not just about the present, but about the near future.

So...with that in mind, here are some use-cases that I can think of (some of which have been borrowed and adapted from the first #etmooc session).

Use Case 1: Getting your biography starter pack from ChatGPT. 

I like writing.  I don't like writing about myself.  It feels very toot-your-own-horn-like, and I've never liked that in people.  I acknowledge that to get ahead in life, and in academia, you have to do some of that self-promotion. Still, I don't like writing about myself. So, I can foresee using something like ChatGPT to give me an author biography, something between 200 and 300 words, because I hate writing those.  I could feed it my academic CV, and it could trawl the internet to see what's been said about me (including LinkedIn testimonials), and it could produce something that I could use whenever I need a bio.  This is something that doesn't work well at the present time.  I have asked ChatGPT to give me a bio, and it just makes stuff up. I've asked it a few times over the last two months, and it still gets things wrong about me. Could I train it to have the correct info? Probably.  Do I want to? No!  The most recent wrongisms about me will be posted in Part III of my Session 1 ponderings.

Use Case 2: Generating a character sheet for an RPG. 

I can foresee AI being used to create a randomized character sheet, though (or a character sheet based on a made-up bio from use case 1). This kind of character sheet can be used in gaming, but it can also be used by students throughout the term if we want to make the classroom a kind of RPG experience.  Another use, blending use-case 1 and use-case 2, is the creation of a CCG (collectible card game) card with you as the character.  Think of this as a kind of playful business card.

Use Case 3: Using AI to change the format of a paper.

I usually write with APA in mind. Yes, I know that I can use a citation manager like Zotero to update all my citations on the fly, but that doesn't account for other elements (like spacing, use of footnotes, making all footnotes endnotes and vice versa, and so on).  Citation managers also ask you to do A LOT of work up front to make sure that your citation data isn't garbage, whether you end up using those sources or not. I could foresee a future where AI processors are "smart" enough to take a paper written in one format (APA, MLA, IEEE, ACM, etc.) and convert the entirety of the paper into another format.

Use Case 4: De-escalating writing

We sometimes write emails when we're mad or upset.  It's usually not a good idea to send those. Maybe you need some time to cool down.  Perhaps a use of AI could be to put some of those emails in a penalty box for review after some time has passed.  Gmail gives me nudges for follow-up emails, so why not a nudge to check whether I really really really wanted to send that nastygram? Or, perhaps Grammarly (or ChatGPT) could help me rephrase or edit it to sound more professional while still registering my discontent with the situation.




ETMOOC Session 1 Ponderings

An image generated with the AI generator NightCafe
Me in a Star Trek-themed anime AI image

Session 2 of #etmooc2 is scheduled for this evening, and I just caught up with the first session over the last few days. The recording can be found here, and it's funny that it took me 3 days to complete.  Part of it was because I could only really do 20-minute increments (with notes and reactions), and part of it was because I paused to experiment with things mentioned.

Part of the session was really dedicated to identifying ways in which this kind of technology can help with what we do.  Essentially flipping the script and going from "ZOMG! ChatGPT is used for cheating" to "how can we use ChatGPT to help us with learning?"

There were a number of examples used in this brainstorming session which raise red flags for me.  I did think of a few examples of my own that may (or may not) be good examples of what you could use tech like this for.  I'll start with my concerns though.

Example 1: Using ChatGPT to grade. This is a use case of a kind of machine-human collaboration. It was acknowledged that the machine can't really accurately grade everything, and the instructor should look over the output to correct or supplement it, but this would be a potentially revolutionary use of this tech.  I'm not convinced. First, I have issues with feeding student submissions into such a system without appropriate guardrails. We've seen, from past actors in this space, that they take student submissions and appropriate them for their own use.  Students should not have to consent to having a "smart" machine grade their submissions as part of their learning experience.  My second issue here is that such machine grading takes away teacher agency to some extent, and it may be taken away as a means of being more "efficient" or "less burned out." Teachers and course facilitators are in a classroom for a reason. If grading submissions is becoming an issue, it's important to interrogate why instead of throwing some LLM at it.

Example 2: Continuing with that thread of human-machine collaboration: when working with ChatGPT, it's like you're working in a group, but instead of other humans, your team-mate is an AI.  Maybe if AIs were like Mr. Data on Star Trek, I might have a different opinion.  Right now LLMs are like dumb appliances.  They can "learn," but they are essentially machines. Collaboration requires agency, scope definition, goals, and drive, which machines simply do not have.  In Connectivism, you can have interactions between human and non-human/appliance nodes, but I would not go so far as to say that they are collaborating. It's not even a one-sided "collaboration" for the human in that equation.  When you're collaborating in a team, you don't have to fact-check your team-mates' submissions.  You can have sufficient overlap between areas of expertise so as to have more than one pair of eyes on claims made, and people who are more expert at something can ELI5 things to other team members, but ultimately there is a back and forth.  In a human-machine "collaboration," the human needs to be an expert in the subject to be able to know where the machine goes wrong and correct it.  In a learning context, I think that this is potentially detrimental to the learning process.  It's not the knowledge navigator future we've dreamed of - at least not yet.

The question that came to my mind is this:  why are some folks thinking of LLMs as a "collaborator" and not looking at Google search as a collaborator?

Example 3: OK, final critique here.  One of the things I've heard over the last few weeks is something along the lines of: "if you are a good prompt engineer you can get some amazing information, which you have to fact-check."  There are just too many conditionals here for ChatGPT to be the kind of study buddy mentioned above in example 2. This reminds me of my undergraduate days, when I learned about library databases and how to search for resources using Boolean logic. Yes, you needed to play around with your logic and your search terms (and sometimes you needed to learn controlled vocabulary), but you got actual sources that you could read and evaluate (and cite).  I think prompt engineering is less of a sign of things that learners need to learn and more of a sign of a system that is still half-baked ;-). That said, I come back to the fact that you need to know how things work in order to assess whether the output is of any use (or even factual).  An example that comes up is people learning another language.  You would write something in English (assuming English is not your native language) and pop it into an LLM to have it convert that text into something more "native sounding." Amongst other issues, it's useful to know why one option sounds more correct than others when you're learning a language. An LLM could do it for you, but that doesn't help you progress as a learner. We had an example of why it's important to know your stuff (even if machines help) in Star Trek: Picard this season. The short version is that the ship's captain is brought to sick bay with some symptoms. The veteran doctor realizes that he has internal bleeding that the younger doctor's medical imaging devices failed to catch. If the veteran doctor didn't know her stuff, the captain would be dead πŸ€·πŸ»β€β™‚οΈ

Anyway, this post is getting too long, so I'll save my ideas for using ChatGPT/AI for another post ;-)


Thoughts? 


~~~~~~

Just for documentation purposes, here are the objectives of the first session:

By participating in the synchronous Zoom session and any additional activities you pursue as part of your own learning experience, you will see how your colleagues are responding to ChatGPT. By the end of the live session and completion of any other activities you pursue, you will be able to:

  • Identify at least three ways ChatGPT might be of benefit to you and those you serve in your section of our lifelong learning environment
  • Anticipate at least three challenges ChatGPT may pose to you and those you serve
  • Describe at least one way you may begin incorporating ChatGPT into your work or describe at least one step you can take to overcome a challenge you face in incorporating ChatGPT into your lifelong learning efforts



Assessment in a Generative Connectivist Future

image of me (from back in the day) in a pondering pose

Hey!  Blogging two days in a row! Who would have thunk it?  Well, I did tell Dave I'd read his post, and it did move some gears in the ol' noggin' so I guess something is still working upstairs ;-)

I think you should go and read Dave's post, since I'm just going to reflect and react on a few select points. Dave introduced me to an article by Schinske and Tanner (2014) where they describe four purposes for assessments, those purposes being feedback, motivation, student ranking, and the objective evaluation of knowledge. 

There were two things that jumped out at me: (1) the objective evaluation of a learner's knowledge and (2) ranking learners.  

From a philosophical perspective, I don't think that it's the instructor's job to rank learners. As an instructor, I am not there to indicate whether Tom is marginally better at X than Dick. My job is to help learners get through a particular corpus of knowledge that they should then be able to do something with. That kind of assessment only really works on a one-to-one basis. As an undergrad I had a professor, let's call him Hans, who really believed in the bell curve. πŸ™„ On the first day of class, he announced that there would be only so many As and that most people would fall in the B/C range.  I don't know what his feelings or beliefs were about the other end of the bell curve (the Ds and Fs); I don't think we ever found out. The knowledge that no matter how well you do, you are ranked against others is demotivating. If I know that my grade is most likely going to be in the B-range, I'll most likely nope out of most things in class and strategically choose what to participate in. If I were a student in an AI-world (assuming the AI generation was worth anything), I'd most likely be tempted to just autogenerate whatever garbage, since assessments were more about a belief in the curve than anything actually useful. As an aside, I still, to this day, wonder what a belief in a statistical probability chart even means πŸ€·β€β™‚οΈ. Also as an aside, as an instructional designer, I must have missed the ID class where it was my job to help devise assessments to rank people, instead of actually...assessing their knowledge and application of that knowledge 🀣

The other thing that jumped out at me was the objective evaluation bit. The more time I've spent teaching, the more I've come to the conclusion that I cannot objectively evaluate the entire corpus of the class I teach.  Well, I could, but it would take a very (very very) long time.  Instead, what I've observed happening is that we use snapshots of concept evaluation as a proxy measure for the entirety of the corpus of knowledge that we try to cover in our classes. We pick concepts that may be more "important" than others, or concepts that can be used like key puzzle pieces, so that students can fill in that negative space with concepts and knowledge adjacent to what we're testing. Ultimately, one cohort of CLASS101 is not going to be evaluated the same way as another cohort of CLASS101.

This reminded me a little bit of a conversation I had with one of my doctoral professors at Athabasca.  We were discussing comprehensive exams at the Master's level.  He was telling me that AU moved from a comp exam to a portfolio because, ultimately, their comp exam was neither comprehensive nor an exam.

In any case, back to course design. Dave writes that the internet (over the past 30 years) has changed the educational landscape. The way I see it, these changes represent different eras of the web. Here's what Dave wrote (a bit paraphrased and expanded) - learners have...

  • The availability of connections to other people to support learners in their education journey - examples being SMS, group chats, and various IM clients (Yahoo, ICQ, MSN, etc.). I would say that this was my generation of college undergrad. Not everyone did this (there is a certain amount of access privilege associated with it), but classmates were a good source of peer learning whenever we got stuck.
  • The availability of pre-created content through sites like Chegg and also through Google searches - content that can be used to respond to any existing assessment.  This is just a digitized version of the ol' sneaker net of years past, where someone who had taken the course before could share an exam with others. This was the mode of concern up until the ChatGPT paranoia hit in early 2023.
  • The availability of generative systems that can create responses to assessments, whether they are "correct" or not. This is where we are now, with things like ChatGPT.

While reading Dave's original post, I was reminded of conversations about connectivism over the past 10+ years. This is, in fact, an instantiation of connectivism. We connect to human and non-human nodes, "learning" seems to reside in non-human appliances, and decision-making is itself a learning process. It's this last point that I want to focus on a little, because I think it has implications for design, teaching, and of course - assessment. If we take decision-making as the center of our learning experience, what kind of content is our sine qua non (SQN) content? These are the minimum elements that we need to begin to make decisions, and that also allow us (as learners) to unravel connections to other learning that we need to do. Dave writes that "with the plethora of options for students to circumvent the intent of our assessments this require the rethinking of the way we design our courses."  I agree.  The question is: what is at the core of that learning experience?  Not necessarily content (although it is important to some extent), but rather the ability to be lifelong learners, get new inputs, assess them, and perhaps use them to make decisions about the ever-evolving, ill-formed set of problems that come our way in our personal and professional lives.

Whaddyathink?





Experimenting with NightCafe

Here's another AI-based image tool shared in ETMOOC that I thought I would try out.  This one is called NightCafe, and it creates images based on a prompt and a particular style from its styles list.  My prompt for this one was: Show me a small group of Greek students huddled around a cafe table, drinking caffeinated beverages, while vigorously discussing philosophy.

It's interesting that in my mind I had "college students," but these images remind me a lot of Renaissance Italy (Assassin's Creed II time period) rather than a more modern Athenian cafe with Freddo Cappuccino and Frappe coffees...


NightCafe AI image - College students

To catch a supposed plagiarist

Statler and Waldorf from the Muppets

I don't often read IHE, but when I do it usually bubbles up from my Twitter feed πŸ˜‚. The gem that popped up this morning is one professor's lament about how ChatGPT bested him and his critical pedagogy practices. While I am happy that someone's attitudes have been adjusted by this experience, I was surprised to read, near the end of the opinion piece, that he was familiar with at least some of the principles of critical pedagogy...πŸ™„.  Getting sucked into the paranoia of "Cheaters! Cheaters everywhere!" doesn't sound like someone who's been practicing critical pedagogy for very long.

Anyway - I thought I'd share some of my reactions to the article which I jotted down as I was reading it...(sometimes  I feel like these would be better as TikTok Takes πŸ˜‚)

"I shared with colleagues that β€œAll we have to do is ask β€˜Did you write this?’” and then copy and paste the student work into the prompt box."

My first question is: did you actually research how ChatGPT works? With follow-up questions about the ethics of feeding a student's submission into the system. And...why did you share with your colleagues that you're doing this experiment as if it's a done deal? I mean, I'd personally run the experiment first and see what happened before broadcasting something potentially embarrassing...πŸ™„

"My inkling was that somebody created a shared study guide using ChatGPT and then, one by one, students took pieces word for word that they included in their short essays. Was this an issue of plagiarism or crowdsourcingβ€”and is there a difference?"

If you don't have a good and solid definition of academic dishonesty, then that's your first problem. Crowdsourcing is (most likely) not the same as plagiarism (depending on what your definition of plagiarism is).  You really should know for yourself what you're looking for before you go looking for it because you also need to clue in your learners as to what is and is not academically dishonest in your setting.

"While I was wrestling with the haste of my initial response, I received six frantic student emails, each professing their honesty, fear and dismay. Another student stopped by in tears. She rushed out of her house to catch me early in order to explain that she’d worked really hard on her study guide and the subsequent essay question. Yet another student sent me an example of how ChatGPT took credit for writing an article that we read as a class. By now, I knew that the tools at my disposal were flawed. Those 20 students in my class did not cheat on their essays, despite my confidence in my sleuthing skills."

Congrats my man! You managed to instill in students the fear of God, for no reason. You brought students to tears.  I. Hope. You're. Happy! πŸ™„ You have no sleuthing skills, I think that's now a given.  Listen, I don't mean to be harsh on this person, but before you accuse anyone of anything, you best get your facts straight (unless of course, you're Fox "news" and have money to burn in courtrooms and settlements). I genuinely feel bad for the students, and I hope you do too!

"I lamented to the students that I had made a mistake. Instead of focusing on student growth and innovation, I invested too much time into surveillance strategies. Instead of focusing on my commitment to developing strong personal relationships with students, I let a robot tell me what I should believe. Instead of trusting my students, I had trusted an online tool."

I think hindsight is 20/20, and I am glad you've learned from this experience, but did you really have to put students through this so that you could learn? Why not start with care and believing students, and working on mentoring rather than trying to catch supposed plagiarists?πŸ€”

"The scenario in my class, however, also reminded me of some basic principles of critical pedagogy that we must consider as we are ushered into the new AI era. First, we must trust students, foremost because they are real people. They bring to the classroom their values, livelihoods and traumas. We must be mindful of the ways in which academic surveillance dehumanizes the very people we signed up to nurture"

We've been through this (at least) once before, with remote proctoring "solutions" during the three years of the height of COVID, when people were not in classrooms.  Why did we need to re-learn this lesson so soon?  It raises the question - did we really learn it the first time around, or did we just pay lip service?

"if I were to continue to subscribe to the belief that it’s an educator’s job to catch and punish plagiarists, then everyone in the class would lose."

I don't get paid to be on the lookout for potential plagiarists and cheaters.  That's not the job of the faculty member. Yes, I think if you voluntarily find yourself in that position (because you'd have to consent to doing it), it would be a lose-lose proposition. So...why bring yourself into that position to begin with? πŸ€”

