Multilitteratus Incognitus
Prompting is the Problem (?)
21-04-2026 | Tags: AI, AIEd, EdTech, LLM, magazine, snakeoil

In a recent(ish) issue of TD, I came across an article titled "Prompting is the problem," which piqued my interest (edit: the archive link didn't work; saving a placeholder in case I find a solution).
I think there are pros and cons here, so I don't want to dwell on just the negative. For example, the author writes that...
"Here's an inconvenient truth: Because AI systems are probabilistic and in motion, the same request won't always give the same answer—and the same model won't behave the same way month to month. Research demonstrates measurable behavioral drift across major large-language-model providers."
"That scenario illustrates why syntax-first prompt training backfires. When we teach people to chase consistency from a tool designed for variation, the result is hesitation, not value. Rather than treating AI prompting as the skill, start teaching people how to think with AI—iterate, question assumptions, and use conversation to move the work forward."
My gripe here is that for the last year (or two...or three) the blame about LLM outputs has been resting solely on the shoulders of end-users. "Oh, you're doing it wrong," or "maybe if your prompts were better, you'd get better outputs." This drove folks to books and workshops for better prompting, but it's all pretty pointless, IMO, because results are still probabilistic slop. Sure, you may be able to get something that approximates what you had in mind through "better" prompting, but in a teaching and learning scenario, you'd want consistency (I'd argue) over probabilistic outputs.
The author claims that...
"AI systems are valuable because they show users their own thinking back. The real skill is metacognitive. Instead of users stopping at what the tool gives them, they should push it to help them see what they missed. That is the antidote to cognitive offloading. Users should actively engage with the AI model's outputs to strengthen their own thinking. Every output is a chance to challenge a user's defaults, surface blind spots, and strengthen the individual's reasoning."
There are two threads here. In one thread, I feel like the author is trying to get a bit Vygotskian, treating the LLM like a more knowledgeable other, which it is not. An MKO consciously pushes you just beyond the reach of your own understanding and helps you grow. An MKO, by definition, is more knowledgeable, and you (in theory) don't need to second-guess what they know. You can trust that they are presenting you with current information and know-how, and you can grow your practice. With an LLM, you'd need to already know the material in order to assess the output it gives you, and at that point, what's the point of using it as a learning tool? The other thread is that the LLM becomes a mirror for your thoughts. Cool, I guess, but do we really need a modern-day ELIZA? The evidence, so far, is that LLMs basically become our little yes-men. Is this helpful in learning? If what is being reflected back to us isn't useful and continues to steer us down the wrong path, is this helpful or detrimental to the learning process?
Finally, the author writes...
"Technique-first training creates exactly the pattern that Your Brain on ChatGPT warns about: People learn to focus on getting the prompt right instead of determining whether the output is valid, complete, or useful."
I think this is the biggest failure in my view. Let's say that I want an LLM to give me a literature review on a corpus of 100 articles. I haven't read any of them, so I don't know if they are useful or quality research, and I wouldn't be able to tell you what the main points are with regard to how I want to use them in my research. To do that, I'd need to do the actual work of reading, assessing, thinking, and cognitively processing those 100 articles. To ask an end-user to determine whether the output is valid, complete, and useful negates the point of having the LLM do the work in the first place.
In the end, it feels like a lot of these articles are just making excuses for why the technology doesn't work and suggesting how it can work better, rather than writing this tech off completely, or going back to the lab to experiment a bit with it instead of trying to shoehorn it into everything.
Same old tired narrative: "Classes were built for the 1900s" 👴
06-02-2026 | Tags: AIEd, courseDesign, GenAI, instructionalDesign

[Image: Old Timey School House (Lego version)]
I came across a post on LinkedIn the other daaaay (read this in a Letterkenny cadence, if you know what that is 😆). Here's a direct link to that post if you'd like to engage with it and its author.
Over the past couple of years, I've been trying to get my mojo back when it comes to discussing issues like this. For a brief time, we had MOOCs (well, cMOOCs) with a daily recap of what was happening on Twitter using a specific hashtag, on blogs, and in other places on the web (Downes' gRSShopper, if anyone remembers it). Now things are diffused through LinkedIn posts, people's blogs or branded Substacks, or Discord, and there isn't a place that collects this discussion. Anyway, don't mind my "old man yelling at clouds" moment.
So, one of the things that I've been observing over the last decade (or more) is that a tried-and-true narrative exists any time there is a new technology out there. Namely, that kids these days do the Techy McTechface thingy, and we all need to adapt to Techy McTechface, because Techy McTechface is the future, and we don't want to leave anyone behind and, by doing so, disadvantage learners. Part of this narrative is that we don't simply need the new thing; the old is really inefficient, outdated, and just plain boooohhhriiiinnnggg! During this line of argument, existing assumptions about why things are the way they are, and what the underlying constraints are, are never examined.
Case in point: the post I linked to above. I came across this one after George Siemens reposted it on his LinkedIn timeline with a kudos. To be faaaaaair, there are some merits to what's being said, but these critiques are also not new, and the "what should be done as we move forward" ignores other elements of the environment that shape how we can design and offer instruction. But let me dissect this post a little bit, from a higher education perspective:
Claim (quote):
Most online courses feel like they were built for a different era.
And learners can tell.
The baseline of what constitutes a “good” online learning experience has shifted. When people are used to fast feedback, responsive systems, and support that adapts to their thinking, static course models start to feel archaic very quickly.
Response:
This seems like a false-authority fallacy to me. Here we are positioning learners as some kind of expert on what good (online) learning is. Now, don't get me wrong, we all have our own personal opinions of how we want our courses to be, but as learners we don't know if that's any good. My main analogy here is having a personal trainer at the gym. I can go to the gym and tell you I want to work on my legs (because that's where my strength is and I feel confident), and a personal trainer might help me do some leg reps and push me a bit on that front, but what I might really need is stamina training (to reach whatever goal I have); so a good personal trainer won't have me just working on my leg strength, but would push me toward activities that meet whatever that stated goal is. Just because people are used to something like learner response systems and (what seems like) quick adaptation doesn't mean that this is the only valid sort of pedagogy.
Claim:
What still shows up in many online courses:
📄 Long, text-heavy pages
📚 Too much content, too little thinking
🖱️ Click fatigue and legacy SCORM packages
💬 Unwieldy discussion forums with little direction
🎥 Live sessions bolted on rather than designed in
📦 Resources that stay frozen year after year
That standard used to pass. But not anymore.
Response:
Again, here, don't get me wrong. Information design is important when designing online learning spaces. Long, text-heavy HTML pages aren't necessarily great for usability. Having the facility to get an alternative format is important. If I can download the page as an MP3 and listen and take notes, great. If I can get a PDF so I can place-shift my learning, also great. The length of the content isn't the issue. I do take issue with "too much content, too little thinking" because it pits two things against each other in a way that doesn't make sense. Thinking, assessment, and processing (and coming to a conclusion) require information. Content in a class is the information. You can't have one without the other. And having common content grounds the exploration for a cohort of students so that everyone is on the same page. This is not an either/or situation.
Shovelware (click fatigue in SCORM) has been an issue since I was a wee learning design padawan in 2008. We discussed this back then, too. You know what hasn't changed? The operating environment we work in. Organizations want click-and-submit kinds of eLearning - for better or for worse. This is mostly for compliance. I don't see a reason why these kinds of packages are useful in higher ed, but I assure you, they do exist, and they are most likely there to break up the perceived monotony of an online course. In a corporate setting, learning designers can push back lightly against death by click, but if the organization only values compliance training, you get the cheapest kind of product to produce.
Live sessions that are bolted on rather than designed in - again, this has been an issue since 2008, and what's made it worse, IMO, is the radical flexibility for learners. If live sessions are optional, they will continue to be bolted on.
Finally, resources that are frozen year after year...is this a problem? It can be, but it might not be. There is a nuance here that is lost. If that article from 1975 still makes the point you wanted the student to get, and it still works as a foundation for something you're trying to impart, is the date of the article still a problem? On the other hand, if your resources for eLearning include Adobe Flash...then yes, that's a problem. As an instructional designer, those nuances are important to interrogate.
Claim:
Stronger online course designs are moving toward:
✨ Less content, more sense-making
🧠 Experiential tasks that require context specific judgment, reflection & interpretation
🎬 Videos created during the course, not all upfront
🧩 Shorter, more frequent live sessions designed for connection and synthesis
🔁 Resources & reading lists that evolve with questions, interests, and context
🤝 Visible peer thinking through annotation and shared work
🤖 Integrated AI literacy
🎯 Hyper-specific/bespoke learning tools
👥 Peer learning groups for sustained support
🧪 Assessment that values process, revision, and decision-making
The expectations of quality have increased and weak design choices are more visible than they once were.
But this is also an opportunity to design online courses in a more pedagogically deliberately and responsive way.
Response:
Honestly, I'd love to see some citations here for some of these things. Again, I don't disagree with some of the broad strokes; and no, they are not totally new, and we've been discussing this in the field for the last 20 years or so, but what I want to highlight here is the cost associated with some of these things. Creating video content in situ as the semester progresses is very expensive in terms of hours. Any kind of JIT content creation is very, very time-consuming. Creating evergreen content, ahead of time, is a much more prudent use of time. About 10 years ago, when I first started teaching, I spent A LOT of time creating a weekly course podcast. This would include going through the weekly forums, picking up submissions from students (there were about 120 posts per week), and recording a podcast that highlighted key ideas, contributions, aha moments, and extended the content for the week. I'd also mention things that I thought students missed. I'd record, edit, produce, and post an episode every Sunday so students could listen on their Monday morning commute. It was lots of fun, but that was a 5-6 hour commitment. As a point of reference, the institution only pays for about 10 hours of teaching labor each week, so those 6 hours creating that audio content (which didn't have a transcript back then!) took away time from giving people feedback on their actual assignments. As a result, I ended up putting double the time into the class each week. The students appreciated it (according to the evaluations), but it was not worth my time.

The same problem exists with bespoke tools. Yeah, cool idea. But institutions subscribe to a certain number and type of tools, so there's standardization, both for training and troubleshooting. Anything bespoke falls upon the instructor to support as de facto tech support, which again can suck up a lot of time.
I paint outside the lines when I teach, but I know that this potentially comes at the cost of my spending a ton of time supporting that tool.
Claim:
Moving from content overload (PDFs, slides, articles) to curated depth and dynamic resources is essential if anything is to stick. Stronger courses focus on fewer concepts explored more deeply, structured comparison, interpretation, and synthesis, & deliberate pauses for sense-making.
Response:
Again, this really depends on context. In an introductory course, or in certain undergrad courses where the goal is to get people quickly up to speed on certain basics, yes. You remove the chaff so that students can focus on the wheat. In other contexts, you do want students to struggle a bit with the readings. No one comes to a new discipline ready to read in that discipline, and ready to critically inquire and analyze. The LLM "era" of the last few years didn't start the content slop; content slop existed before that in the guise of low-quality listicles, unsourced claims on low-quality pages, repeat after repeat after repeat on YouTube of debunked learning theories, and predatory journals that will publish anything for a fee. Learning how to be a critical consumer of information is really part and parcel of being a professional in a given field. Furthermore, the depth and breadth of content is dependent on the intended learning outcomes. Less isn't always more, just as more isn't always less.
Claim:
IMHO this also challenges the rigid, predict-everything-upfront course production model.
We used to spend 6–12 months meticulously planning online courses. Outlining every module, scripting every video, filming and building everything in one intensive production sprint.
The new reality demands a different approach: build, test, iterate, stay responsive.
You can’t always predict what students will need, or what the landscape will look like when they are actually taking the course.
Traditional online course development often confuses thoroughness with quality. You can have a meticulously planned 12-week course that completely misreads what students struggle with in week 3, making everything after that less useful.
The best course designers now work in tight cycles: teach something live, refine it based on real feedback, rework it, then move to the next piece. They build courses that evolve with students rather than aging out before they are finished.
This is better for faculty engagement as well, many of whom hate the design-everything-to-the-nth-degree model.
It is also better for institutions, with lower upfront costs, better resource allocation, faster responsiveness and easier course maintenance.
Response:
Build, test, iterate is a core tenet of good instructional design, IMO. You won't get an argument from me, but this has very little to do with the original proposition stated in the first couple of claims, that "courses feel like they were built for a different era." Just because you test and iterate doesn't mean that you don't take your time to plan and execute a project that is as feature-complete as possible when you start. I've seen the 6-12 month figure quoted a lot, and I am sure it's there as rage bait at times 😅. Listen, if faculty had the space to only focus on course development for a sprint cycle, that 6-12 month timeframe would drop drastically. However, faculty (both tenure-track and non-tenure lecturers) have to balance course design with research requirements and committee work. When those things are tallied up into a 40-hour work week (which, I suspect, few in the faculty ranks actually stick to), you can see how a course design project balloons in terms of how long it takes to complete, even for well-meaning, well-trained course designers. I've tried to source this quote (the 6-12 months; although in my experience it's also been cited as a 9-12 month lead time), and I've fallen short. I may have seen a tangential mention in a Gilly Salmon book (if I remember correctly), but it was more observational or conjecture rather than an actual study.
The other issue with the above arguments is that we don't design bespoke courses for each cohort that comes in each semester. We have courses that fit in a curriculum, and based on that planning, we have some potential garden paths to follow. If someone wants to go out in the wilderness on their own, they can (they are adults), but a course always exists within an ecosystem. A good designer/instructor will have additional maps, paths, materials, and activities for folks who want to expand upon what is learned in the class, but a class isn't a free-for-all. Classes should adapt to a certain extent, but if folks are getting stuck in Week 3, that needs to be investigated as part of that build, test, iterate cycle.
As I wrap up, I also want to share my observation that the plan is nothing, but planning is everything. Planning and designing ahead of time is important to know the depth that you will go to. It also illuminates potential pathways. You may choose to get rid of the plan as you teach, but not doing that upfront work is problematic on a lot of systemic levels. And no, AI won't save you if you don't plan ahead.
Thoughts?
Discord as a discussion forum - initial thoughts from last fall
10-01-2026 | Tags: courseDesign, discord, forums, instructionalDesign, InstructionalTechnology, participation, teaching

Last fall, I got to design and teach a course that I've been wanting to teach for a very long time: Language Learning and Technology (or, in other words, Computer-Assisted Language Learning, if you are in the language education field). It was a lot of fun to design, and a good experience to teach. I really enjoy design work (even though I don't get to do it often), and it's been ages since I taught a class that was a regular graduate class; all of my grad courses since 2021 have been Capstone courses, which I've treated mostly like a studio space with peer review. There wasn't a lot of "discussion" happening in that kind of course.
Anyway, the last time I taught a course with regular weekly discussions, we used Blackboard "Classic." I've been using discussion forums on Canvas for a few years now through my OLC facilitation, and before that, I've had experience with a variety of LMS and their associated forum functionality. They are all pretty similar, and honestly, I felt a bit bored by the format, having used this format both as a learner and as an instructor since 2005. Over the years, both as a learner and as an instructor, what I've seen most students do is to jump in, say something by some initial deadline, leave, come back 4 days later, post responses to one or two peers, and disappear again until next week. That's not really a discussion; there's a lack of idea exchange. I've described this behavior like a drive-by shouting: You're in a slow-moving car, you shout something out the window, and you speed away. Meanwhile, the people on the side of the road who are having a conversation are left asking, "what was that about?" When I was new to teaching, I was using a discussion board rubric based on the Middle Tennessee State University discussion forum grading rubric. It was fine, it worked fine, and students got their grade, but over the years, this seems like a lot of work for a discussion forum activity that's worth 10% of overall participation.
What did I want the discussion to be like? I wanted it to be more "of the moment." If students were reading something in the weekly readings, I wanted them to be able to come online and post something that intrigued them, even if they had not read ALL the readings just yet. I wanted this facility to be in their pocket, so if they had their phone with them while reading (even if they didn't have access to a laptop for Canvas), they could just share multimodal content with others. I also wanted them to share what they found with others in a more drip fashion. Found a relevant podcast episode, article, or resource? Why not share it with others? Finally, I wanted folks to feel like the discussion forum was more of a watercooler rather than a space where they felt compelled to show up twice per week. Given all this, I decided to give Discord a try. Yes, I know that Canvas has a mobile app, but while its UI was innovative in 2011 when I first encountered it (and also in comparison to other LMS providers of the time), it feels a bit stale today and not conducive to what I wanted it to do.
Here were some parameters for discussion forums each week:
- Students needed to "log in" five days per week and post something.
- This meant that they needed at least five posts per week.
- The posts could be anything relating to the course, I'd also accept a post per week that was off-topic (e.g., share a funny photo of your pet, a recommendation for a TV show, or whatever builds community).
- Students would need to participate in the weekly topic, but if there was a week that just didn't resonate with them, they could lurk in that channel, read what others wrote, and post in other channels.
Have you used an alternative to the discussion forum (that isn't Voicethread)? What do you think?
Rolleyes... LLM edition
03-01-2026 | Tags: AI, AIEd, LLM, microsoft, training

It's the winter break, and now that I have some actual downtime, I decided to do some Microsoft training. I think the last time I had the mental space to do any of this on the Microsoft Education site was sometime in 2021 (if the last badge earned is any indication). Anyway, I went through the course offerings to see what's on tap at Microsoft, and I came upon a whole load of AI-related things. Cool. While I've been paying attention to this whole AI thing, I haven't really paid that much attention to what corporate training is saying about their products (and how they might be used).
I've seen some colleagues post their badges on LinkedIn, so I thought I'd also follow the AI for Educators learning path on Microsoft Education to get conversant with what others on my campus are experiencing through these trainings.
Now, AI has been touted as a time saver on a variety of fronts, a claim that I think has yet to pan out. As I was going through the AI for Educators training, the following use case scenario was presented:
An educator in a secondary class needs to create a rubric for an upcoming writing unit in Greek mythology, then write an exemplar response for learners to follow. Facing what could be hours of work to complete, the educator turns to an AI tool and starts to enter in the prompts. The educator starts with the rubric. They turn to Microsoft 365 Copilot Chat and paste in the state standards and description of the upcoming Greek mythology writing unit. Then, they ask Copilot Chat to create a 20-point rubric including all the information they pasted. It’s completed, but it’s not quite what they were expecting. After interacting with Copilot Chat with a few more clarifying prompts like “Make the wording better for a 13-year-old,” the educator has the rubric. Finally, the educator asks the AI to write an exemplar response based on the rubric it created.
It should be pointed out that the examples Microsoft's training gives typically assume a K-12 setting, and the example above reiterates this. This type of scenario isn't unique in its over-the-top-ness; I've seen similar use cases given as examples elsewhere. The thing that all of these scenarios have in common is the feel of a late-90s or early-2000s infomercial, where the exasperated user tries to do something "simple" (like drain pasta in a colander) only to have the task explode in their face, or have the pasta end up all over the sink and down the drain, as was the case with the Pasta Pro cooker, or whatever is happening in those Tupperware infomercials 😂.
Anyway, I would expect that if someone is preparing lessons and activities based on subject matter that they know, they shouldn't need an LLM to create this stuff for them. Furthermore, why in God's name would you outsource the rubric creation? Don't you know what you want out of the activity? And why would you want the LLM to create an exemplar response? Why should novice learners emulate what an LLM produces? This all just seems highly sus to me 😒.
ChatGPUghs...and LLNos...
30-12-2025 | Tags: AI, AI-Slop, AIEd, GenAI, LLM, teaching

[Image: Academic AI-Slop (ChatGPT produced)]
Happy end of the semester, and almost the end of the calendar year!
Alright, I'll own up to it. The title of this post probably doesn't hit the mark ;-)
One more calendar year is in the can, and for me, it was a year of (potential) endings, a year of (potentially) new beginnings, and a year where things changed much more so than I expected in my teaching practice. I won't dwell too much on the endings and beginnings in this post because those things are best viewed in retrospect, so they might take a few years to distill down to a post. I did want to reflect a bit on my teaching practice. As I look back at the year of teaching, I think this year marked an inflection point: the year I started seeing AI-slop as student submissions. Interestingly enough, it wasn't my graduate students, but my faculty learners in some of the various workshops I facilitate that submitted such work as part of their coursework.
Can I prove it? No. No, I can't.
Do I have a strong Spidey Sense about people using ChatGPT? Absolutely!
Was it technically prohibited? Again, no. No, it wasn't.
How do I know? I've experimented with ChatGPT and Claude enough to be able to spot something that looks like AI-slop... The telltale sign in this case was a lot of bullet-point lists, heavy on emojis and quite generic in nature - these things could be applied to a Philosophy class, or an English Literature class, or a History class. Granted, there was some customization here for the content of the class, based on what workshop participants had submitted in prior weeks as their area of focus, but that could have easily been in the prompt. I really had no way of discerning if folks had manually edited the slop that came out, or if they just copied/pasted it into the assignment submission. To add to this, a small number of folks also worked weeks ahead of where we actually were with the cohort; as a result, the slop-submitters were submitting things 4, 5, 9 weeks in advance.
Now, don't get me wrong, when I am in the student's seat, I also work ahead. Life can get unexpected, so I read ahead. If I have the forum post prompts, I compose my initial post ahead of time (although I don't post it until the week of). And, if I have assignment details, I try to get those started during my downtime. This inoculates me in case things get busy in other facets of life, and "gifts" me free time at the end of the term in case things all fall into place. Working ahead isn't an issue; it's all of these factors combined that present an issue.
As a teacher/instructor/facilitator, I give feedback on the work that folks submit in my various classes and workshops. It's through feedback that we grow. I found myself feeling like I wasted my time with some of the submissions I received this year because I was using my "free" time (outside of my day job hours) to give at least some feedback to people who submitted something. By the end, I may have gotten a bit resentful that I was potentially commenting on AI-Slop. Now, I am compensated for this kind of professional activity, and in my 20 years of teaching professional development workshops, I've always had people phoning it in (for one reason or another), but there was an understanding: you phone something in, I'll give you some feedback. You put in some half-assed work, and maybe my feedback will spark an "AHA!!!" moment. At least the phoning in included your own work. Now, phoning it in basically means 10 minutes with some prompting and copy/pasting into a submission box. No thought. Just bypassing all (or most) cognitive processes in course design. Meanwhile, my feedback easily takes double that time, which means that a large portion of my weekends (and some afternoons) were spent reading, pondering, processing, and ultimately responding to AI-Slop. I could have been playing Xbox or going to the gym...
I acknowledge that I am a bit burned out from all the things this year and I might be more crabby than usual, but even when I took courses because they were a requirement for something (e.g., Gen Ed, concentration requirements, professional requirements, whatever) that I wasn't particularly keen on, I at least tried my best to put deliverables together that resembled something passable. I do not understand why faculty who want to "AI-Proof" their curricula use LLMs to submit slop for their own professional development.
I am hoping that my spring semester "sabbatical" from teaching will help recharge those batteries for my summer and fall teaching.
Blackboard Ughs...
26-12-2025 | Tags: Blackboard, LMS, teaching

A great number of years ago, I was part of an edtech team tasked with evaluating learning management systems to move to after WebCT was essentially EoL'd by Blackboard. Long story short, despite our recommendations, management went with Blackboard, which I guess is now classed as "Classic," and it too has been EoL'd. I wasn't a big fan of Classic, but it worked fine for what we needed it to do. When the time came for the next LMS, my institution went to Canvas (finally...). I've seen Ultra, the next incarnation of Blackboard, and all I could say was "me'h." I was just happy I didn't have to use it ;-)
Fast forward a year, and I am teaching for an institution that uses Bb Ultra. On the plus side, the course was already designed for me, so I didn't really have to figure out a ton of the nuts and bolts of Ultra ahead of the semester, and it seems pretty easy to pick up. There are some big "ughs" when it comes to using this system, however. My three pet peeves are read notifications that don't go away, a UI that just isn't great, and the kindergarten way of building content.
While building content wasn't at the top of my list (the course was built already, after all), trying to put something in an announcement, in a way that didn't treat me as a child, was frustrating. While the blocks of content that one can drop into a page (or page type) are interesting, there is very little wiggle room for customizability. I can have either formatted text or HTML, for example. I can't just go into the HTML for a page I've formatted with Bb and then drop in a snippet of HTML embed code. This makes the page layout a bit long and scroll-y, IMO.
The UI I got used to, for the most part, over the course of the term, but the number of times I hit that stupid "X" button and expected to go back to my previous page/setting/location only to be dropped out of my course entirely was too high. I have yet to get used to the horizontal menu items on the top of the page, as if this is some kind of application in my computer. Granted, at least Blackboard is playing a bit with the UI, so I respect that to some extent, but I wish there were a different way to navigate that was more "classic." The top-to-bottom accordion style for content is also not that great...
Finally, let's talk about the disturbing lack of an easy way to mark discussions as read/unread. If you hover over a discussion contribution long enough, it will be marked as read, but if you read quickly or skim, that won't happen. Also, if you have unread posts (or Bb thinks you do...), you'll see the number of unread posts on your course homescreen next to discussions (cool, I think). However, when you click on that "discussions" component from the menu, you may end up seeing NO discussions marked as unread. My best guess is that some response to a response to a response in a discussion forum is marked unread, so the main thread shows as all caught up while the unread-posts marker is still there 🤦♂️. When you are trying to determine whether there are posts that need attention, this idiotic marker doesn't make things easy.
Overall, this isn't the worst LMS I've used in my 15 years (or more?) of teaching online, but I would have expected some better UX from such a big company with decades of experience supporting LMSs...
[Image: Screenshot of Blackboard Ultra (not mine)]
Turn it in...facepalm
%15-%10-%2025, %R AI, AIEd, assessment, grading, plagiarism, teaching
This semester, I am helping co-facilitate a course somewhere new. More details on that in the future (maybe), but for the time being, I wanted to reflect a bit on technology use. Since this is a new institution for me, I have to click through the technology acceptance pop-ups, see policy dialog boxes that I no longer see at my own institution, and so on. One of the assignments I am grading this term requires the use of TurnItIn, and while clicking around to familiarize myself with the course, I opened TII. Since I am new to this instance of TII, I was greeted with a disclaimer about AI detection (bolding my own):
Our AI writing assessment is designed to help educators identify text that might be prepared by a generative AI tool. Our AI writing assessment may not always be accurate (i.e., our AI models may produce either false positive results or false negative results), so it should not be used as the sole basis for adverse actions against a student. It takes further scrutiny and human judgment in conjunction with an organization's application of its specific academic policies to determine whether any academic misconduct has occurred
Alright, fine. It's a fine disclaimer; there should be human judgment determining whether something breaks your academic code of conduct. However, I was wondering what this thing flags. So, since I have access to a previous semester's course shell, I popped in quickly to see what last year's student submissions were for this assignment to get a sense of what TII flags. Pretty much all of the assignments were flagged with about 20% suspected AI use... Ooooohhhh kay.... I decided to have a closer look to see what was marked as AI. Having used TII for stuff at my own institution, I know that their 'regular' plagiarism detector usually "detects" things that aren't really plagiarism... Looking at the highlighted passages that are suspected to be generated by AI, I have absolutely no idea how any human would go through and determine if these passages were AI-generated. How does one determine if something is truly created by a synthetic text extruding machine? To accuse a student of academic dishonesty would require solid proof, and all TII does is sow the seeds of anxiety and doubt amongst faculty that students have cheated, with no way of being able to verify such claims.
So...I guess my takeaway is: Tell me you're marketing something that doesn't work without telling me you're marketing something that doesn't work...
All AI Use Case Claims Need Verification - Period.
%29-%09-%2025, %R AI, ChatGPT, experiment
A few weeks ago, I saw a post on a friend's social media feed claiming that you could snap a photo of a bookshelf, upload it to ChatGPT, and have it tell you where specific items are. The responses were all variations of "oh wow! I would have never thought of doing this! This is so nifty!"
Well... my first thought, knowing how ChatGPT works, was that this is a complete bunch of bullshit, and I immediately jettisoned this idiocy from my brain. Over the last few days, though, I'd been thinking that I really should try it to see whether it actually works. After all, I wouldn't be doing my due diligence if I didn't check this use case out. If it did work, it could be useful for accessibility purposes.
However, I didn't want to snap a photo of my own bookshelf, so I'd been looking for a sample photo to upload to ChatGPT to test this out. Last night, I was browsing Reddit when I saw a post from the hot sauce subreddit, and I had found my test case. So, I downloaded the picture, submitted it to ChatGPT (yeah... sorry about that), and asked it some questions. I've added the photo at the end of this post (it may be downscaled by Blogger).
Then, I got to asking questions. First, a curveball: ask for something that doesn't exist in the photo.
Prompt: Help me find the tabasco sauce. Where is it?
Response: I looked closely at the shelves, and I can see the Tabasco sauce bottles on the top right corner of the display.
They’re the familiar small bottles with the diamond-shaped Tabasco logo and red caps, placed in front of the green cylindrical packaging tubes.
Verdict: ❌
Total failure, but OK. Let's ask it for something that DOES exist...
Prompt: Where is Melinda's hot sauce?
Response: I see the Melinda’s Hot Sauce bottles on the right side of the shelves, about the third row up from the bottom.
They’re grouped together in a row, with their distinctive white caps and colorful labels featuring the portrait of a woman (the Melinda’s logo).
Verdict: ❌
Prompt: Where's the fat cat hot sauce?
Response: I found it — the Fat Cat Hot Sauce bottles are on the left side of the shelves, about the fifth row up from the bottom.
They have white caps and labels with a simple cat face logo, making them stand out from the more fiery and graphic hot sauce labels around them.
Verdict: ❌
Prompt: Where is the spicy shark hot sauce?
Response: I spotted the Spicy Shark hot sauce bottles on the left side of the display, about the fourth row up from the bottom.
They’re the bottles with white caps and colorful labels featuring a shark illustration, lined up together in a row.
Verdict: ❌
Prompt: Where is the orange krush hot sauce?
Response: The Orange Krush hot sauce is on the left side of the display, second row from the top.
You can spot it by its bright orange label and cap, standing out among the darker bottles around it.
Verdict: ❌
While this is not extensive testing, given that it could not find ANYTHING and that it hallucinated a location for the nonexistent Tabasco, it's clear to me that this is not a use case for ChatGPT and LLM-based "AI." I really think that folks should not just hit "share" on anything related to LLMs until they independently test things (at least if you're into tech and have the means to experiment; I don't expect this from my dad or other lay folk).
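If you want to rerun this kind of spot-check yourself, here's a minimal sketch of scripting it with the OpenAI Python client (v1.x chat-completions API) instead of the chat UI. To be clear, my actual test was done by hand in ChatGPT; the model name, file name, and helper function below are illustrative assumptions, not what I used.

```python
# Hypothetical sketch: pair one shelf photo with each "where is X?" question
# and send it as a vision request. The model name is an assumption.

def build_vision_request(image_b64: str, question: str) -> dict:
    """Assemble a chat-completion payload with one question and one image."""
    return {
        "model": "gpt-4o-mini",  # assumed vision-capable model
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": question},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
            ],
        }],
    }

# The questions from the experiment above; the first is the curveball.
QUESTIONS = [
    "Help me find the tabasco sauce. Where is it?",  # not in the photo
    "Where is Melinda's hot sauce?",
    "Where's the fat cat hot sauce?",
    "Where is the spicy shark hot sauce?",
    "Where is the orange krush hot sauce?",
]

# To actually run it (needs the `openai` package and an API key):
# import base64
# from openai import OpenAI
# client = OpenAI()
# with open("hot_sauce_shelf.jpg", "rb") as f:
#     img = base64.b64encode(f.read()).decode()
# for q in QUESTIONS:
#     resp = client.chat.completions.create(**build_vision_request(img, q))
#     print(q, "->", resp.choices[0].message.content)
```

Checking each answer against the actual photo still has to be done by a human, of course; the only point of scripting it would be rerunning the same prompts across models or dates.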
Nose to the AI grinder and course dev ponderings
%12-%07-%2025, %R AI, AIEd, appliedLinguistics, CALL, courseDesign, EdTech, teaching
Well, it's been a hot minute since I last jotted down some thoughts. Don't worry, blog, it's not you, it's me 😂. I also have a daily meditation/reflection journal that I used to jot things down in, even if sometimes it was "Doogie Howser style," but it only gets an entry twice or thrice a week now. Gotta take a step back and do some more reflecting. On the plus side, the weather is finally nice enough to get out and do a daily walk, something I used to do most summers until work (in all its multiple facets) got to be a bit too much.
In any case, you didn't come here to read about my (lack of?) exercise and my busy schedule, did you? I thought I'd jot down some thoughts since it is July and summer is one-third gone (sigh). One of the things that I am not making as much progress on as I would have liked is course design. In all honesty, I thought that by now I would have chosen all my readings for the fall, started to put them into Canvas, and have August free to just chill for the rest of the summer. That's not happening... While I have evaluated one book (and a whole bunch of articles), my balance is a bit off, despite having around 40 readings so far for my class. The main issue I keep coming up against is that there is so much AI-Boosterism out there that the more balanced CALL stuff I want to include might seem irrelevant to learners. The "we gotta teach AI because learners will need it in the future, it's inevitable" is a discourse that you can't avoid. So yeah... Trying to navigate the fresh, actually stale, hype of it all.
On the plus side, there are a few things that have happened! I created my Academic Article resources list and my Podcasts Resources list, things that students will need for one-third of the materials each week (plus one of their major assignments), so things ain't all that bad. I've also started prepping the options for the major term project, so that's going well. I just need to develop some directions about what to do with each one.
The other notch on the "plus side" of things is that I've gotten my CALL groove back. Back in 2012(ish) when I was examining my options for a doctoral program, something in CALL was an area that I was seriously considering. As part of the work in that area, I spent a lot of time reading CALICO and ReCALL articles, and through my evaluation of resources so far, it's really great to see (or in many instances hear) a lot of familiar names of researchers that I recognize from back then. I think it's really lit a fire that I've been kindling for a while. The other cool thing is that I've seen so much overlap between applied linguistics/CALL with instructional design in what I've read so far. I always knew this was the case, but it's really great to evaluate book chapters that make those connections much more concrete.
On a slightly different note, as I've been pondering this course's final project over the past month (and reading materials for inclusion in the course), one of the things that stood out to me was that a survey course on EdTech (with a focus on language teaching and CALL) isn't just geared toward teachers. Sure, teachers may be the majority of folks enrolled (given the type of program I work for), but there are other personae who might find the course valuable. The Technology/Teaching Coach, the Educational Technologist, the CALL Imagineer, and the EdTech Researcher all come to mind as potential students in the course.
The famous saying "T∞ knąw thgselϝ is the begin Ϸominutius" - Yup
%06-%05-%2025, %R AI, AIEd, ArtificialLanguage, ChatGPT, experiment
[Image: ChatGPT Patch of the Wise Owl]
Recently, I've been playing around with image generation in ChatGPT, not so much to create output that I plan on using seriously (although some outputs do end up on this blog as post images), but more to see how easy (or hard) it is to get something from my mind's eye into some kind of machine output. I am also curious to see how the LLM interprets what I input (that element of surprise).
I only really have the free credits that OpenAI gives its free users, so my experimentation is basically 10-15 minutes of futzing around while watching TV in the evening. As I was playing around the other day, this scene from Star Trek: The Next Generation came to mind. In "Schisms," the crew had been abducted by an alien race but had no memory of it (think Close Encounters of the Third Kind). As they start to remember small elements of their experience, they all try to piece together their memories so that they can come up with a reconstruction of the probing table they were on.
I've included an excerpt from the show in this post. Back in 1992, this was a bit of a "whooooaaah!!!" moment. Look at what the holodeck and the Enterprise computer can do!!! Playing with ChatGPT to make something like this happen... not so much! [Although, maybe I should take the directives that the Enterprise crew gives to the holodeck and see what ChatGPT comes up with haha 😅]
My experiments focused on creating something small and concrete, like embroidered patches. I was interested in seeing what the technology does with mashups, like "a morale patch for a D&D players group that cosplays as Thundercats and loves Captain Planet." Some of the results were interesting (I'll admit that), and other results were humorous, so I got a good laugh. But when something interesting came up and I wanted to make slight modifications, that's when things just totally broke. Text rendering in generated images is still problematic. Asking for something specific like "To know thyself is the beginning of wisdom" may come out correct on the first attempt (sometimes), but as you ask the machine to update the graphic and keep everything else the same, the quote somehow becomes "To know thyself is the beginnnnn dom." Further prompts to fix the error by restating "To know thyself is the beginning of wisdom" turn the text into "T∞ knąw thgselϝ is the begin Ϸominutius." The same is true for adding visual elements to your creation. If you ask the machine to add a small element to a specific part of the image, the entire image gets reworked, and elements that you liked are changed and modified 🙄.
So, unlike Schisms, where the crew of the Enterprise got their examination table (in a short time I might add), playing with image generation and modification on ChatGPT feels like a game of Marco Polo, or Hot/Cold. You think you're getting hotter with every successive iteration and prompt, but then something happens and you're cold again...much much colder...
Fun parlor trick (that consumes A LOT of energy), but useful? Nah...