Club Admiralty

v7.2 - moving along, a point increase at a time

Multilitteratus Incognitus

Pondering what to learn next 🤔

ChatGPT to address faculty burnout

A NightCafe AI image created using the prompt: A professor dealing with digital burnout and attempting to address it.
NightCafe: Professor dealing with Digital Burnout

It seems like I am operating in Greek Time for #etmooc2 😅. This is a post for the second session of #etmooc2, which dealt with ethical issues around AI in education (and maybe AI broadly). Since the recording of the third session has not been posted yet, I'll make the argument that I am still "on time" for some commentary on this 😂. This post has been in my drafts for two weeks now. With the end of the term upon us, I guess I have a little more time to process some thoughts.

In any case, when I started putting together this blog post a couple of weeks ago, I was signed up for a symposium (or mini-conference?) hosted by a notable LMS provider, and of course AI was the hot topic du jour. One of the sessions I attended was hosted by instructional designers and focused on using ChatGPT to get started with course development, especially when that course development comes at the 11th hour, when an adjunct finds out that they are teaching a course next week! Eek!

In this scenario, an adjunct is asked to create an "INTRO101" course that starts in two weeks. As the adjunct, you need to create a syllabus, rubric, discussions, lecture notes, and quizzes. It's noted that this course design may come on top of your current full-time work, so time is at a premium (as is probably the case for most adjuncts). As someone who's worked as an instructional designer, I appreciate the "let's make the best of it" approach here, but using generative AI (gAI) to ameliorate a busted system is really problematic! It's also instructional design 101: as part of a needs analysis, you'd find that faculty don't necessarily need a faster or more efficient way to design courses; rather, they need the systemic issues to be addressed. For example:

1) If the course is an INTRO101 course (not an actual course, just a placeholder for a common course, required of majors, and usually taught by many instructors), the adjuncts should not need to scramble to put together course materials for this course.  The readings, activities, assessments, and rubrics for such a course should already exist, and an instructor should walk in and "turn on the lights" metaphorically.  They should be free to adjust the settings and the materials, but ultimately these should be minor tweaks, not major development work!

2) Hiring for upcoming semesters should not be a "shit's going down in two weeks! Let's roll!" kind of moment. Adjuncts should know at least a few months in advance if they are likely to have a course, and they should be compensated for any additional training required, and any prep they do outside of their contractual period.

3) Faculty are hired as subject experts. Assuming that someone is hired for their specific subject knowledge (and they're not just another warm body), having gAI create the outcomes, course outlines, and assignments (the entire skeleton of a course) really sidelines that subject expertise and starts to frame the course in a way that takes away human agency. Humans fill in the gaps in what the gAI produces but don't necessarily question it. How do I know? We see this already with publisher-provided course materials. I've had more than one professor in my time as a learner who basically used the publisher's slides, structured the modules around the textbook, and built the assignments around the text. Throw in products like My___Lab, and you've removed agency from a human by making it convenient to just go with it.

4) Finally, the materials that a gAI produces aren't that great. I have downloaded some ChatGPT outcomes and activities for a class I used to teach (future post), but for this one I'll focus on an experiment I ran with ByteAI. I asked it to give me eight learning objectives for a course titled "Artificial Intelligence in Human Education, Development, and Society." This is what it produced:
  • Understand the basic concepts and principles of artificial intelligence and its applications in human education, development, and society.
  • Analyze the ethical and social implications of using artificial intelligence in education, development, and society.
  • Evaluate the potential benefits and limitations of using artificial intelligence in education, development, and society.
  • Develop critical thinking skills to assess the impact of artificial intelligence on human learning, development, and society.
  • Explore the different types of artificial intelligence technologies and their applications in education, development, and society.
  • Identify the challenges and opportunities of integrating artificial intelligence into educational and developmental practices.
  • Develop practical skills in designing and implementing artificial intelligence-based educational and developmental interventions.
  • Collaborate with peers to design and present a project that demonstrates the use of artificial intelligence in education, development, and society.

If you'll notice, the objectives are basically fill-in-the-blanks. The same starting text could be used for any kind of course. As an aside, it would be interesting to see what it does with other course titles, so I might feed ByteAI some course titles from various university courses. Perhaps a subject for further experimentation.
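To illustrate what I mean by fill-in-the-blank, here's a quick sketch (my own, not anything ByteAI actually does) showing how objectives like these behave as templates: swap in any course topic and they still read as plausible.

```python
# Hypothetical boilerplate objectives, paraphrased from the ByteAI output above.
# Note how the course topic is just a slot to be filled in.
OBJECTIVE_TEMPLATES = [
    "Understand the basic concepts and principles of {topic} and its applications.",
    "Analyze the ethical and social implications of {topic}.",
    "Evaluate the potential benefits and limitations of {topic}.",
    "Identify the challenges and opportunities of integrating {topic} into practice.",
]

def generate_objectives(topic: str) -> list[str]:
    """Fill the course topic into each boilerplate objective."""
    return [t.format(topic=topic) for t in OBJECTIVE_TEMPLATES]

# Any topic "works", which is exactly the problem:
for obj in generate_objectives("underwater basket weaving"):
    print("-", obj)
```

The point being: if a human can reproduce the structure of the output with a four-line template, the gAI isn't bringing much subject expertise to the table.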

Next, I asked ByteAI to provide me with three substantive graduate class assignments for a course titled "Artificial Intelligence in Human Education, Development, and Society." This is what it gave me: 
  • Research Paper: Students will be required to write a research paper on a topic related to the use of artificial intelligence in human education, development, and society. The paper should be based on a critical analysis of the literature and should include a discussion of the ethical and social implications of using artificial intelligence in these contexts.
  • Case Study Analysis: Students will be required to analyze a case study that involves the use of artificial intelligence in human education, development, and society. The case study should be selected from a list of examples provided by the instructor or chosen by the student with instructor's approval. Students will be required to analyze the case study from multiple perspectives, including ethical, social, and practical considerations.
  • Project Proposal: Students will be required to develop a project proposal that involves the use of artificial intelligence in human education, development, and society. The proposal should include a detailed description of the project, including its goals, methods, and expected outcomes. Students will be required to present their proposals to the class and receive feedback from their peers and the instructor.
Again, this is very much a magnetic-poetry/fill-in-the-blank kind of output, IMO. This kind of output could apply to any class. Just change "Artificial Intelligence in Human Education, Development, and Society" to your course title 🙄

Anyway, that's what I have for session 2 of #etmooc2. If I have the energy, I'll go through the output of ChatGPT to see what it thinks my Intro to Instructional Design & Educational Technology course should look like ;-)

ETMOOC2 Session 1 Ponderings - Part III (the outtakes)

Me in AI generated Star Trek themed image
Me again, anime-style AI (or at least what
nightCafe thinks I look like in this setting)
Alright, so here's my final post on session one of #etmooc2. You can find post 1 and post 2 on this blog. For this post, I thought I'd share some prompts and responses from my playing with ChatGPT. Fair warning: I tried to have it rewrite a profanity-laden email (it sounds badly written, IMO, but still has lots of profanities); I tried to channel the r/antiwork subreddit. I guess a content warning is appropriate.

Here's some more ChatGPT prompting... about me. I have underlined all of the information I think is wrong. This can be wrong either to a small extent (i.e., it's exaggerated) or to a large extent (i.e., it's factually wrong).

Prompt: Please give me an author biography for Dr. Apostolos Koutropoulos that is 200-300 words

Attempt 1:

Dr. Apostolos Koutropoulos is an educator, researcher, and advocate for open education and open technologies. He was born in New York City and raised in Greece, where he developed a passion for teaching and learning. He earned his Bachelor's degree in Education from the University of Athens and later moved to the United States, where he completed his Master's degree in Education at the University of Massachusetts Amherst.

Dr. Koutropoulos went on to earn his Ph.D. in Educational Technology from the University of Massachusetts Boston, where his research focused on open education and open technologies. He has since become a leading voice in the open education movement, advocating for the use of open educational resources (OER) and open-source software in education.

In addition to his work as an educator and researcher, Dr. Koutropoulos is also an active member of the open-source community. He has contributed to several open-source projects, including the Moodle learning management system, and has organized and participated in numerous open education events and conferences around the world.

Currently, Dr. Koutropoulos serves as the Director of the Online Learning and Instructional Design program at the University of Massachusetts Boston. He continues to research and promote the use of open education and open technologies, and his work has been published in several academic journals and presented at conferences worldwide.

Attempt 2 ("regenerate response")

Dr. Apostolos Koutropoulos is an educator and researcher in the field of instructional technology and e-learning. He holds a Ph.D. in Educational Psychology from the University of Connecticut and has taught at various institutions including the University of Massachusetts Amherst, the University of Illinois Springfield, and the Open University of Catalonia.

Dr. Koutropoulos is known for his work in open education, specifically in the areas of open educational resources and open online courses. He has been involved in various open education initiatives and has authored and co-authored several research articles and book chapters on the subject.

Aside from his work in open education, Dr. Koutropoulos has also conducted research on the use of technology in education, including the design and implementation of online learning environments and the use of mobile devices for learning. He has published extensively on these topics in peer-reviewed journals and presented his work at various national and international conferences.

Dr. Koutropoulos has also been involved in various projects related to educational technology, including the development of an online learning management system and the creation of multimedia educational resources. He has been recognized for his contributions to the field of instructional technology, receiving awards for his work from organizations such as the Association for Educational Communications and Technology and the Sloan Consortium.

Overall, Dr. Apostolos Koutropoulos is a highly respected figure in the field of instructional technology and e-learning, known for his contributions to open education and his research on the use of technology in education.

Prompt: If I were making an RPG character sheet for Dr. Apostolos Koutropoulos, what would his stats be?

As an AI language model, I don't have access to Dr. Koutropoulos' personal attributes or abilities. However, based on his professional background and accomplishments, here are some potential stats for an RPG character sheet:

Intelligence: 18 - Dr. Koutropoulos has a Ph.D. in Educational Psychology and has conducted extensive research in the field of instructional technology and e-learning.

Wisdom: 16 - Dr. Koutropoulos has experience teaching at various institutions and has been involved in numerous open education initiatives, demonstrating practical knowledge and insight.

Charisma: 14 - Dr. Koutropoulos has presented his work at various national and international conferences and has received awards for his contributions to the field, indicating a level of influence and respect.

Dexterity: 10 - While not specifically related to his professional background, this score represents average physical agility and coordination.

Strength: 8 - Again, not related to his professional background, but this score represents below-average physical strength.

Constitution: 12 - Dr. Koutropoulos has been recognized for his contributions to the field of instructional technology, indicating a level of resilience and perseverance.

Of course, these are purely hypothetical and subjective scores based on one interpretation of Dr. Koutropoulos' professional background. An actual RPG character sheet for Dr. Koutropoulos would depend on the game's mechanics and rules.

Prompt: If Dr. Apostolos Koutropoulos were a character in a collectible card game, what would his stats, special abilities, and character history be?

Card Name: Dr. Apostolos Koutropoulos


  • Intelligence: 90
  • Wisdom: 80
  • Charisma: 75
  • Dexterity: 40
  • Strength: 30
  • Constitution: 50

Special Abilities:

  • Open Education: Dr. Koutropoulos can create and utilize open educational resources to enhance his abilities and those of his allies. When he is played, he can draw an extra card from his deck.
  • E-Learning Expert: Dr. Koutropoulos can use his expertise in e-learning to gain an advantage in battle. When he attacks, he can deal extra damage to opponents with lower Intelligence or Wisdom stats.
  • International Recognition: Dr. Koutropoulos' contributions to the field of instructional technology have earned him international recognition. Once per turn, he can draw a card from his opponent's deck and add it to his own hand.

Character History:

Dr. Apostolos Koutropoulos is a renowned expert in the field of instructional technology and e-learning. He has spent his career researching and promoting open education, creating and utilizing open educational resources to benefit learners around the world. Dr. Koutropoulos has also been involved in various projects related to educational technology, including the design and implementation of online learning environments and the development of multimedia educational resources. His contributions to the field have earned him international recognition and numerous awards. In battle, Dr. Koutropoulos utilizes his knowledge and expertise to gain an advantage over his opponents, using open education and e-learning to enhance his own abilities and those of his allies.

โš ๏ธ๐Ÿšซ๐Ÿšจ profanity ahead!โš ๏ธ๐Ÿšซ๐Ÿšจ 


Can you please write this in a less angry and more professional voice. It's going to my manager: "Hey Mike!  You know, you're just one increadibly dull fuckwit who has absolutely no idea how to do his job, you don't give a fuck about your employees, and you're just looking our for yourself you selfish prick!  In all seriously, I wonder how this restaurant hasn't been shut down by the health department; because if we treated the customers like you treat your employees, the food would be a shitty health hazard!  So as to retain what little I have of my mental and physical health, I say fuck you to you, fuck your "mandatory overtime" that's not actually overtime because you pay us shit - which by the way is against employment laws. I QUIT."


Dear Mike,

I hope this message finds you well. I wanted to discuss some concerns I have regarding my employment with the restaurant. I have been feeling increasingly frustrated with the lack of support and attention to employee needs, and I believe that some changes need to be made in order to create a positive work environment.

One area of concern for me has been the issue of mandatory overtime. I have noticed that this is a frequent occurrence, and I believe that it is important for us to follow employment laws regarding fair compensation for overtime work. Additionally, I have felt undervalued and underpaid for the work that I do.

I would appreciate the opportunity to discuss these issues with you further and work together to find solutions that benefit both the restaurant and its employees. Thank you for your time and attention to this matter.

Sincerely, [Your Name]


So...what are your observations of these outputs?


Image prompt for NightCafe, used in this post: please create a character card for a star trek themed collectible card game where the character's name is Dr. Apostolos Koutropoulos


ETMOOC2 Session 1 Ponderings - Part Deux

NightCafe generated image of me in star trek setting
Another me in a Star Trek setting, by NightCafe

This is a continuation of my previous post from the other day. I didn't want to leave things on an all-negative note, so here's part II with some thoughts on how AI might be used (or at least areas of AI that I am warming up to). This isn't a post about the current state of AI, but rather a 5- (or 10-) year look out. This is mostly inspired by a recent tweet by Tim Fawns, who asked folks to think not just about the present, but about the near future.

So...with that in mind, here are some use-cases that I can think of (some of which have been borrowed and adapted from the first #etmooc session).

Use Case 1: Getting your biography starter pack from ChatGPT. 

I like writing. I don't like writing about myself. It feels very toot-your-own-horn-like, and I've never liked those people. I acknowledge that to get ahead in life, and in academia, you have to do some of that self-promotion. Still, I don't like writing about myself. So, I can foresee using something like ChatGPT to give me an author biography, something between 200 and 300 words, because I hate writing those. I could feed it my academic CV, and it could trawl the internet to see what's been said about me (including LinkedIn testimonials), and it could produce something I could use whenever I need a bio. This is something that doesn't work well at present. I have asked ChatGPT to give me a bio, and it just makes stuff up. I've asked it a few times over the last two months, and it still gets things wrong about me. Could I train it to have the correct info? Probably. Do I want to? No! The most recent wrongisms about me will be posted in Part III of my Session 1 ponderings.

Use Case 2: Generating a character sheet for an RPG. 

I can foresee AI being used as a way to create a randomized character sheet, though (or a character sheet based on a made-up bio from use case 1). This kind of character sheet can be used in gaming, but it could also be used by students throughout the term if we want to make the classroom a kind of RPG experience. Another use, blending use cases 1 and 2, is the creation of a CCG (collectible card game) card with you as the character. Think of this as a kind of playful business card.

Use Case 3: Using AI to change the format of a paper.

I usually write with APA in mind. Yes, I know that I can use a citation manager like Zotero to update all my citations on the fly, but that doesn't account for other elements (like spacing, use of footnotes, making all footnotes endnotes and vice versa, and so on). Citation managers also ask you to do A LOT of work up front to make sure your citation data isn't garbage, whether you end up using those sources or not. I could foresee a future where AI processors are "smart" enough to take a paper written in one format (APA, MLA, IEEE, ACM, etc.) and convert the entire paper into another format.

Use Case 4: De-escalating writing

We sometimes write emails when we're mad or upset. It's usually not a good idea to send those; maybe you need some time to cool down. Perhaps a use of AI could be to put some of those emails in a penalty box for review after some time has passed. Gmail gives me nudges for follow-up emails, so why not a nudge to check whether I really, really, really wanted to send that nastygram? Or perhaps Grammarly (or ChatGPT) could help me rephrase or edit it to sound more professional while still registering my discontent with the situation.
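The penalty-box idea doesn't even need much "AI" to prototype. Here's a minimal sketch (entirely hypothetical; the word list and cooling-off period are my own invented placeholders) of an outbox hold: if a message trips a crude anger heuristic, its send time gets pushed out for review.

```python
# Hypothetical "penalty box" for angry emails: messages that trip a crude
# keyword heuristic are held for a cooling-off period instead of sent at once.
import datetime

ANGRY_WORDS = {"furious", "ridiculous", "unacceptable", "quit", "fed"}  # toy list
COOL_OFF = datetime.timedelta(hours=1)  # arbitrary cooling-off period

def schedule_send(body: str, now: datetime.datetime) -> datetime.datetime:
    """Return when the email should actually go out."""
    # Normalize words: strip punctuation, lowercase.
    words = {w.strip(".,!?\"'").lower() for w in body.split()}
    if words & ANGRY_WORDS:
        return now + COOL_OFF  # hold for later review
    return now  # calm enough; send immediately
```

A real system would presumably use a sentiment model rather than a word list, but the nudge mechanic itself is just scheduling.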


ETMOOC Session 1 Ponderings

An image generated with AI generator "nightcafe"
Me in a Star Trek-themed anime AI image

Just as session 2 of #etmooc2 is scheduled for this evening, I've caught up with the first session over the last few days. The recording can be found here, and it's funny that it took me three days to complete. Part of it was that I could only really do 20-minute increments (with notes and reactions), and part of it was that I paused to experiment with things mentioned.

Part of the session was dedicated to identifying ways in which this kind of technology can help with what we do, essentially flipping the script from "ZOMG! ChatGPT is used for cheating!" to "how can we use ChatGPT to help us with learning?"

There were a number of examples used in this brainstorming session which raised some red flags for me. I did think of a few examples of my own that may (or may not) be good uses of tech like this. I'll start with my concerns, though.

Example 1: Using ChatGPT to grade. This is a use case of having a kind of machine-human collaboration. It was acknowledged that the machine can't really accurately grade everything, and the instructor should look over the output to correct or supplement it, but this was presented as a potentially revolutionary use of the tech. I'm not convinced. First, I have issues with feeding student submissions into such a system without appropriate guardrails. We've seen, from past actors in this space, that they take student submissions and appropriate them for their own use. Students should not have to consent to having a "smart" machine grade their submissions as part of their learning experience. My second issue is that such machine grading takes away teacher agency to some extent, and that agency may be traded away as a means to being more "efficient" or "less burned out." Teachers and course facilitators are in a classroom for a reason. If grading submissions is becoming an issue, it's important to interrogate why it's becoming an issue instead of throwing some LLM at it.

Example 2: Continuing with that thread of human-machine collaboration: when working with ChatGPT, it's like you're working in a group, but instead of other humans, your teammate is an AI. Maybe if AIs were like Mr. Data on Star Trek, I might have a different opinion. Right now LLMs are like dumb appliances. They can "learn," but they are essentially machines. Collaboration requires agency, scope definition, goals, and drive, which machines simply do not have. In connectivism, you can have interactions between human and non-human/appliance nodes, but I would not go so far as to say that they are collaborating. It's not even a one-sided "collaboration" for the human in that equation. When you're collaborating in a team, you don't have to fact-check your teammates' contributions. You can have sufficient overlap between areas of expertise so as to have more than one pair of eyes on claims made, and people who are more expert at something can ELI5 it to other team members, but ultimately there is a back and forth. In a human-machine "collaboration," the human needs to be an expert in the subject to be able to know where the machine goes wrong and correct it. In a learning context, I think this is potentially detrimental to the learning process. It's not the Knowledge Navigator future we've dreamed of - at least not yet.

The question that came to my mind is this:  why are some folks thinking of LLMs as a "collaborator" and not looking at Google search as a collaborator?

Example 3: OK, final critique here. One of the things I've heard over the last few weeks is something along the lines of: "if you are a good prompt engineer you can get some amazing information, which you have to fact-check." There are just too many conditionals here for ChatGPT to be the kind of study buddy mentioned above in example 2. This reminds me of my undergraduate days, when I learned about library databases and how to search for resources using Boolean logic. Yes, you needed to play around with your logic and your search terms (and sometimes you needed to learn a controlled vocabulary), but you got actual sources that you could read and evaluate (and cite). I think prompt engineering is less a sign of things that learners need to learn and more a sign of a system that is still half-baked ;-). That said, I come back to the fact that you need to know how things work in order to assess whether the output is of any use (or even factual). An example that comes up is people learning another language. You might write something in English (assuming English is not your native language) and pop it into an LLM to have it converted into something more "native sounding." Among other issues, it's useful to know why one option sounds more correct than others when you're learning a language. An LLM could do it for you, but that doesn't help you progress as a learner. We had an example of why it's important to know your stuff (even if machines help) in Star Trek: Picard this season. The short version is that the ship's captain is brought to sick bay with some symptoms. The veteran doctor realizes that he has internal bleeding, which the younger doctor's medical imaging devices failed to catch. If the veteran doctor didn't know her stuff, the captain would be dead 🤷🏻‍♂️

Anyway, this post is getting too long, so I'll save my ideas for using ChatGPT/AI for another post ;-)



Just for documentation purposes, here are the objectives of the first session:

By participating in the synchronous Zoom session and any additional activities you pursue as part of your own learning experience, you will see how your colleagues are responding to ChatGPT. By the end of the live session and completion of any other activities you pursue, you will be able to:

  • Identify at least three ways ChatGPT might be of benefit to you and those you serve in your section of our lifelong learning environment
  • Anticipate at least three challenges ChatGPT may pose to you and those you serve
  • Describe at least one way you may begin incorporating ChatGPT into your work or describe at least one step you can take to overcome a challenge you face in incorporating ChatGPT into your lifelong learning efforts


Assessment in a Generative Connectivist Future

image of me (from back in the day) in a pondering pose

Hey!  Blogging two days in a row! Who would have thunk it?  Well, I did tell Dave I'd read his post, and it did move some gears in the ol' noggin' so I guess something is still working upstairs ;-)

I think you should go and read Dave's post, since I'm just going to reflect on and react to a few select points. Dave introduced me to an article by Schinske and Tanner (2014) in which they describe four purposes for assessment: feedback, motivation, student ranking, and the objective evaluation of knowledge.

There were two things that jumped out at me: (1) the objective evaluation of a learner's knowledge and (2) ranking learners.  

From a philosophical perspective, I don't think it's the instructor's job to rank learners. As an instructor, I am not there to indicate whether Tom is marginally better at X than Dick. My job is to help learners get through a particular corpus of knowledge that they should then be able to do something with. This type of assessment only really exists on a one-to-one basis. As an undergrad I had a professor, let's call him Hans, who really believed in the bell curve. 🙄 On the first day of class, he announced that there would be only so many As and that most people would fall in the B/C range. I don't know what his feelings or beliefs were about the other end of the bell curve (the Ds and Fs), and I don't think we ever found out. The knowledge that no matter how well you do, you are ranked against others is demotivating. If I know that my grade is most likely going to be in the B range, I'll most likely nope out of most things in class and strategically choose what to participate in. If I were a student in an AI world (assuming the AI generation were worth anything), I'd most likely be tempted to just autogenerate whatever garbage, since the assessments would be more about ranking than about anything actually useful. As an aside, I still, to this day, wonder what a belief in a statistical probability chart even means 🤷‍♂️. As another aside, as an instructional designer, I must have missed the ID class where it was my job to help devise assessments to rank people instead of actually... assessing their knowledge and application of that knowledge 🤣

The other thing that jumped out at me was the objective evaluation bit. The more time I've spent teaching, the more I've come to the conclusion that I cannot objectively evaluate the entire corpus of the class I teach. Well, I could, but it would take a very (very, very) long time. Instead, what I've observed happening is that we use snapshots of concept evaluation as a proxy measure for the entire corpus of knowledge we try to cover in our classes. We pick concepts that may be more "important" than others, or concepts that can be used like key puzzle pieces, so that students can fill in the negative space with concepts and knowledge adjacent to what we're testing. Ultimately, one cohort of CLASS101 is not going to be evaluated the same way as another cohort of CLASS101.

This reminded me a little bit of a conversation I had with one of my doctoral professors at Athabasca.  We were discussing comprehensive exams at the Master's level.  He was telling me that AU moved from a comp exam to a portfolio because, ultimately, their comp exam was neither comprehensive nor an exam.

In any case, back to course design. Dave writes that the internet (over the past 30 years) has changed the educational landscape. The way I see it, these changes represent different eras of the web. Here's what Dave wrote (a bit paraphrased and expanded) - learners have had...

  • The availability of connections to other people to support learners in their education journey - examples being SMS, group chats, and various IM clients (Yahoo, ICQ, MSN, etc.). I would say that this was my generation of college undergrad. Not everyone did this (there is a certain amount of access privilege associated with it), but classmates seemed like a good source of peer learning whenever we got stuck.
  • The availability of pre-created content through sites like Chegg and through Google searches - content that can be used to respond to any existing assessments. This is just a digitized version of the ol' sneakernet of years past, where someone who had taken the course before could share an exam with others. This was the mode of concern up until the ChatGPT paranoia hit in early 2023.
  • The availability of generative systems that can create responses to assessments, whether those responses are "correct" or not. This is where we are now with things like ChatGPT.

While reading Dave's original post, I was reminded of conversations about connectivism over the past 10+ years. This is, in fact, an instantiation of connectivism. We connect to human and non-human nodes, "learning" seems to reside in non-human appliances, and decision-making is itself a learning process. I want to focus a little on this last point because I think it has implications for design, teaching, and of course assessment. If we take decision-making as the center of our learning experience, what kind of content is our sine qua non (SQN) content? These are the minimum elements that we need to begin to make decisions, and that also allow us (as learners) to unravel connections to other learning that we need to do. Dave writes that "with the plethora of options for students to circumvent the intent of our assessments this require the rethinking of the way we design our courses." I agree. The question is: what is at the core of that learning experience? Not necessarily content (although it is important to some extent), but rather the ability to be lifelong learners: to get new inputs, assess them, and perhaps use them to make decisions for the ever-evolving, ill-formed set of problems that come our way in our personal and professional lives.



Experimenting with NightCafe

Another AI-based image tool shared in ETMOOC that I thought I would try out. This one is called NightCafe, and it creates images based on a prompt and a particular style from its styles list. My prompt for this one was: Show me a small group of Greek students huddled around a cafe table, drinking caffeinated beverages, while vigorously discussing philosophy.

It's interesting that in my mind I had "college students," but these images remind me a lot of Renaissance Italy (Assassin's Creed II time period) rather than a more modern Athenian cafe with Freddo Cappuccino and Frappe coffees...

NightCafe AI image - College students

To catch a supposed plagiarist

Statler and Waldorf from the Muppets

I don't often read IHE, but when I do, it usually bubbles up from my Twitter feed 😂. The gem that popped up this morning is one professor's lament about how ChatGPT bested him and his Critical Pedagogy practices. While I am happy that someone's attitudes have been adjusted by this experience, I was surprised to read, near the end of the opinion piece, that he was familiar with at least some of the principles of critical pedagogy... 🙄 Getting sucked into the paranoia of "Cheaters! Cheaters everywhere!" doesn't sound like someone who's been practicing critical pedagogy for very long.

Anyway - I thought I'd share some of my reactions to the article, which I jotted down as I was reading it... (sometimes I feel like these would be better as TikTok Takes 😂)

"I shared with colleagues that โ€œAll we have to do is ask โ€˜Did you write this?โ€™โ€ and then copy and paste the student work into the prompt box."

My first question is: did you actually research how ChatGPT works? With follow-up questions about the ethics of feeding a student's submission into the system. And why did you share with your colleagues that you're doing this experiment as if it's a done deal? I mean, I'd personally run the experiment first and see what happened before broadcasting something potentially embarrassing... 🙄

"My inkling was that somebody created a shared study guide using ChatGPT and then, one by one, students took pieces word for word that they included in their short essays. Was this an issue of plagiarism or crowdsourcingโ€”and is there a difference?"

If you don't have a good and solid definition of academic dishonesty, then that's your first problem. Crowdsourcing is (most likely) not the same as plagiarism (depending on what your definition of plagiarism is).  You really should know for yourself what you're looking for before you go looking for it because you also need to clue in your learners as to what is and is not academically dishonest in your setting.

"While I was wrestling with the haste of my initial response, I received six frantic student emails, each professing their honesty, fear and dismay. Another student stopped by in tears. She rushed out of her house to catch me early in order to explain that sheโ€™d worked really hard on her study guide and the subsequent essay question. Yet another student sent me an example of how ChatGPT took credit for writing an article that we read as a class. By now, I knew that the tools at my disposal were flawed. Those 20 students in my class did not cheat on their essays, despite my confidence in my sleuthing skills."

Congrats, my man! You managed to instill in students the fear of God, for no reason. You brought students to tears. I. Hope. You're. Happy! 🙄 You have no sleuthing skills; I think that's now a given. Listen, I don't mean to be harsh on this person, but before you accuse anyone of anything, you best get your facts straight (unless, of course, you're Fox "news" and have money to burn in courtrooms and settlements). I genuinely feel bad for the students, and I hope you do too!

"I lamented to the students that I had made a mistake. Instead of focusing on student growth and innovation, I invested too much time into surveillance strategies. Instead of focusing on my commitment to developing strong personal relationships with students, I let a robot tell me what I should believe. Instead of trusting my students, I had trusted an online tool."

I think hindsight is 20/20, and I am glad you've learned from this experience, but did you really have to put students through this so that you could learn? Why not start with care, believe students, and work on mentoring rather than trying to catch supposed plagiarists? 🤔

"The scenario in my class, however, also reminded me of some basic principles of critical pedagogy that we must consider as we are ushered into the new AI era. First, we must trust students, foremost because they are real people. They bring to the classroom their values, livelihoods and traumas. We must be mindful of the ways in which academic surveillance dehumanizes the very people we signed up to nurture"

We've been through this (at least) once before with remote proctoring "solutions" during the height of COVID, when people were not in classrooms. Why did we need to re-learn this lesson so soon? It raises the question: did we really learn it the first time around, or did we just pay lip service?

"if I were to continue to subscribe to the belief that itโ€™s an educatorโ€™s job to catch and punish plagiarists, then everyone in the class would lose."

I don't get paid to be on the lookout for potential plagiarists and cheaters. That's not the job of the faculty member. Yes, I think if you voluntarily find yourself in that position (because you'd have to consent to doing it), it would be a lose-lose proposition. So... why put yourself in that position to begin with? 🤔


A future of couch potatoes

Meme from the Umbrella Academy where a student is using ChatGPT to do homework, and the professor has used ChatGPT to generate the homework

I've been a bit "behind" in my participation in ETMOOC 2.0. I've been enjoying keeping an eye on Discord, but I haven't really been participating as much as I would like to.  In a couple of weeks, the semester ends, so mental bandwidth should be freed up a bit ;-).

This past week, one of the streams that crossed my little part of Twitter was about teachers using ChatGPT to give feedback to learners on the homework/essays they've submitted for grading. I managed to avoid most of this discussion - probably a symptom of having rolled my eyes so hard I almost knocked myself senseless 😂.

When I stopped for a moment to consider the possibility of this thing being useful for teaching (assuming we put aside any ethical or legal issues that come with uploading a student's paper into this kind of platform), I was reminded of a comic meme that I saw on the ChatGPT subreddit last week (or was it two weeks ago?). While it is more focused on the workplace environment, I think it has applicability to education:

This whole discussion brings a few questions to mind:

  • If we're so bothered by students using AI to do their work, why aren't we bothered by having AI do our work?
  • If teaching is essentially a kind of caring task, doesn't outsourcing feedback signal that we don't care about the people who submitted their work to us for comment? I understand that there is an issue with workload and the increasing adjunctification of academia, but AI is not the way to solve this; rather, it's leaning into it.
  • If learners need to use AI to write their assignments for them, what does this say about our assignments? Might they be boring? Missing a relevant connection? Might it be a workload issue for students? Or might it be a skills-related gap that prevents them from doing what they need to do? Or all of the above? Creating more make-work for learners isn't the way to solve this issue.

OK, so you might be wondering where the whole couch potato thing is coming from. Many years ago, when I was in high school, I had a math teacher (Mr. Erhardt). He had a banner in his classroom that read "Math is not a spectator sport." In my memory, this thing was printed on a dot matrix printer sometime in the late 80s, even though my high school experience was in the mid-to-late 90s ;-). In any case, when we did math problems in Mr. Erhardt's class, he always pointed to the banner when he wanted to solicit class participation. When I became an instructional designer and was thinking about active learning, Mr. Erhardt and his banner came to mind.

I adapted this by saying that "Learning is not a spectator sport" and I started using the analogy of the course instructor/facilitator/teacher as a personal trainer.  Students go to class for the same reason they go to the gym - they want to get something out of it, but that something only comes when they do the thing that they need to do (push-ups, squats, burpees, 30 minutes on the treadmill, whatever). The personal trainer is there to demonstrate, to spot, and to give feedback so that you do the activities correctly and you don't injure yourself.   Ultimately it's the learner who needs to sweat and feel the burn. If the learner's goal is to watch someone else sweat, then there is a mismatch.  They shouldn't be at the gym.  They should be courtside at basketball games, or a hockey arena, or on the sidelines as people run a marathon.

If learners are using AI to not (metaphorically) sweat, and if we're using AI to not (metaphorically) spot and correct our learners, then why don't we just all agree to be couch potatoes?

Couch Potatoes of AI (Generated by Bing)


Detecting AI "plagiarism" and other wild tales


popular Scooby Doo meme where the professor (Fred) unmasks the evildoer (cheaters) to find out that the cheaters were the professor's own insecurities
If only it weren't for those darned kids!

Admittedly, I haven't been blogging a lot these days. I keep meaning to come back and actually get into the habit of writing more frequently, but as one of my Twitter acquaintances once observed, you make a note of it to come back to, but then lose motivation (loosely paraphrasing Matt Crosslin - I think). In any case, apropos of TurnItIn's announcement this past week that they will now have an AI writing detector and AI writing resource center for educators as part of their offerings (wooooo! /s), I thought I'd spend a few minutes jotting down some thoughts. Warning: I am a bit of a Dr. Crankypants on this one...

If you haven't been paying attention, the early research is out on these kinds of detector schemes. People have been playing around with ChatGPT and AI author detectors, and the results are in: these detectors just aren't good. Even GPTZero, "The World's #1 AI Detector" (🙄), isn't all that accurate. Change some words around, paraphrase, change a bit of the syntactic structure, and boom! Something that was flagged as likely to be AI-generated is no longer marked as AI-generated. Furthermore, texts that aren't AI-generated are falsely flagged as AI text for... reasons. Who knows how the system works? 🤔🤷‍♂️

All this really rubs me the wrong way, as an instructional designer, as an EdTech person, and as someone who works in higher ed.  For reasons that I won't go into too much detail, my work requires students to submit their capstone examinations through TII.  The capstone examination is a culminating analytical essay (with citations and appropriate argumentation) that is undertaken once students have completed their MA coursework. You must pass this exam to earn your MA. As my department's EdTech Guru Extraordinaire, I administer this exam twice a year, so I get to set everything up in the LMS. 

The capstone exam is a stressful time in the lives of students because even though many complete their courses with flying colors, they cannot graduate until the capstone essay is deemed acceptable (based on the assessment rubric that is shared with learners ahead of time). Students have two chances to pass, or they never receive a degree. Even if you are confident in your knowledge (like I was back in 2010 when I took the exam), you still feel the jitters and nervousness. My hands were shaking going into the computer lab. Back then you had four hours to answer four essay-style questions, so restroom breaks were also on a real "must go!!!" basis. Questions such as "What if I don't pass? What if I fail twice? Are two (or more) years wasted?" swirl through your mind. Luckily, the exam today is a bit different than in my day, so students have more time, open books, and more opportunities to write something broader with more authentic prompts. Still, those questions about "what if I don't pass?" remain. TII, the run-of-the-mill, non-AI-enhanced variety, still provides false positives. It frikkin' flags the name of the university because it... (wait for it) appears in other students' submitted papers in the institutional repository! 🤦‍♂️

Now, as the exam gatekeeper, I view these results before I anonymize the exams for grading. I have to make a call as to whether someone plagiarized, and I always come to the conclusion that TII is full of 💩. On the plus side, because the exam is longer than it was when I was a student, students have access to the report and can modify/update their submission to get rid of any inadvertent plagiarism (i.e., they forgot to cite something), but some folks get stressed out at seeing a 20% plagiarism report even when they've cited everything properly. I'd be stressed out too if I only had two chances at an exam that determined whether I would graduate and I got these reports! Now, add to it some black-box "likelihood of AI writing this" output where you don't know how the mechanism works, and you have no idea how to evaluate such an output. Even though my default stance is to trust the student, it still takes energy to reassure students when they see this kind of junk output...

It annoys me that such "tools" increase stress on learners and add to the workload of care that professionals have to take on to reassure students who are, rightfully, freaked out.


New Article out: Speculative Futures on ChatGPT and Generative Artificial Intelligence (AI)

black and white drawing of a paperboy holding a newspaper whose cover says "extra! extra!"

This week a new collaborative article was published in the Asian Journal of Distance Education titled "Speculative Futures on ChatGPT and Generative Artificial Intelligence (AI): A Collective Reflection from the Educational Landscape." Our friend and colleague Aras Bozkurt invited us to participate in a piece using speculative methodology, something new to me, to explore positive, and not so positive, narratives around the use of AI in education. I love collaborating on this type of output because I learn so much, both from engaging in the experience and from the other participants (and there were 36 of us in this endeavor). It was great to be in the same academic and social mindspace with old friends and acquaintances from past collaborations, as well as to work with new folks. The published document is 78 pages long - a short book, if we consider page breaks between stories and perhaps some illustrations, which the original article doesn't have, but which someone in our collaborative suggested, and which I think is a nifty idea. The last big collaboration like this (that I remember) was around Emergency Remote Teaching, which I missed out on because of the dissertation work I was doing at the time, so this was a great way to get re-started with collaborative work now that school's done.

In writing my positive and negative stories, I tried to be a bit gray in both of them. Depending on your lens and your predispositions, you might see either as a positive or a negative. Either way, I hope they make you think 😁

I need to do a deeper dive into speculative methodologies. For this round, I read enough to be conversant and get my brain going, but a big part was a kind of moshpit learning. This approach reminds me a bit of another project I wanted to start a few years back, one that also stalled due to dissertation work: the "Modern Day Aesop" (or a better name TBD), where modern animal-based fables could be crafted with an eye toward teaching, learning, and technology. Similarly, another friend, Lance Eaton, had proposed a book a few years back called "Bring Me My Chisel: The Resistance Manifesto to the Cyborg Takeover of Academia," which also seemed to take a fictionalized approach to discussing teaching, learning, and technology in academia from a (perhaps) Luddite perspective.

Anyway, here are the deets on the new article :-)


While ChatGPT has recently become very popular, AI has a long history and philosophy. This paper intends to explore the promises and pitfalls of the Generative Pre-trained Transformer (GPT) AI and potentially future technologies by adopting a speculative methodology. Speculative future narratives with a specific focus on educational contexts are provided in an attempt to identify emerging themes and discuss their implications for education in the 21st century. Affordances of (using) AI in Education (AIEd) and possible adverse effects are identified and discussed which emerge from the narratives. It is argued that now is the best of times to define human vs AI contribution to education because AI can accomplish more and more educational activities that used to be the prerogative of human educators. Therefore, it is imperative to rethink the respective roles of technology and human educators in education with a future-oriented mindset.

Check it out:

Bozkurt, A., Xiao, J., Lambert, S., Pazurek, A., Crompton, H., Koseoglu, S., Farrow, R., Bond, M., Nerantzi, C., Honeychurch, S., Bali, M., Dron, J., Mir, K., Stewart, B., Costello, E., Mason, J., Stracke, C. M., Romero-Hall, E., Koutropoulos, A., Toquero, C. M., Singh, L., Tlili, A., Lee, K., Nichols, M., Ossiannilsson, E., Brown, M., Irvine, V., Raffaghelli, J. E., Santos-Hermosa, G., Farrell, O., Adam, T., Thong, Y. L., Sani-Bozkurt, S., Sharma, R. C., Hrastinski, S., & Jandrić, P. (2023). Speculative Futures on ChatGPT and Generative Artificial Intelligence (AI): A Collective Reflection from the Educational Landscape. Asian Journal of Distance Education. Retrieved from


