Multilitteratus Incognitus
Prompting is the Problem (?)
AI, AIEd, EdTech, LLM, magazine, snakeoil

In a recent(ish) issue of TD, I came across an article titled "Prompting is the problem," which piqued my interest (archive link in case you don't have a subscription).
I think there are pros and cons here, so I don't want to dwell on just the negative. For example, the author writes that...
"Here's an inconvenient truth: Because AI systems are probabilistic and in motion, the same request won't always give the same answer—and the same model won't behave the same way month to month. Research demonstrates measurable behavioral drift across major large-language-model providers."
"That scenario illustrates why syntax-first prompt training backfires. When we teach people to chase consistency from a tool designed for variation, the result is hesitation, not value. Rather than treating AI prompting as the skill, start teaching people how to think with AI—iterate, question assumptions, and use conversation to move the work forward."
My gripe here is that for the last year (or two...or three) the blame for poor LLM outputs has rested solely on the shoulders of end-users. "Oh, you're doing it wrong," or "maybe if your prompts were better, you'd get better outputs." This drove folks to books and workshops for better prompting, but it's all pretty pointless, IMO, because the results are still probabilistic slop. Sure, you may be able to get something that approximates what you had in mind through "better" prompting, but in a teaching and learning scenario, you'd want consistency (I'd argue) over probabilistic outputs.
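The "designed for variation" point is easy to make concrete. Here's a toy sketch in plain Python (no vendor API; the vocabulary and the scores are invented for illustration) of why the same request won't always give the same answer: the model produces a score distribution over next tokens and one token is sampled from it, so any nonzero temperature means run-to-run variation, and even pinning the temperature can't survive a model update that changes the distribution itself.

```python
# Toy illustration of why LLM outputs vary run to run: tokens are
# *sampled* from a probability distribution, not looked up
# deterministically. The vocabulary and scores below are made up.
import math
import random

def sample_next_token(logits: dict[str, float], temperature: float) -> str:
    """Sample one token from softmax(logits / temperature)."""
    scaled = {tok: score / temperature for tok, score in logits.items()}
    max_s = max(scaled.values())  # subtract max for numerical stability
    weights = {tok: math.exp(s - max_s) for tok, s in scaled.items()}
    total = sum(weights.values())
    r = random.uniform(0, total)
    for tok, w in weights.items():
        r -= w
        if r <= 0:
            return tok
    return tok  # fallback for floating-point edge cases

# Hypothetical next-token scores after the prompt "The capital of France is"
logits = {"Paris": 5.0, "Lyon": 2.0, "located": 1.5, "a": 1.0}

for temp in (0.2, 1.0):
    samples = [sample_next_token(logits, temp) for _ in range(10)]
    print(f"temperature={temp}: {samples}")
# Low temperature mostly yields "Paris"; higher temperature wanders.
# And even temperature near 0 only fixes *this* distribution: a model
# update changes the logits themselves, which is the "behavioral
# drift" the quoted passage describes.
```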
The author claims that...
"AI systems are valuable because they show users their own thinking back. The real skill is metacognitive. Instead of users stopping at what the tool gives them, they should push it to help them see what they missed. That is the antidote to cognitive offloading. Users should actively engage with the AI model's outputs to strengthen their own thinking. Every output is a chance to challenge a user's defaults, surface blind spots, and strengthen the individual's reasoning."
There are two threads here. In the first, the author seems to be getting a bit Vygotskian, treating the LLM like a more knowledgeable other (MKO), which it is not. An MKO consciously pushes you just beyond the reach of your own understanding and helps you grow. An MKO, by definition, is more knowledgeable, and you (in theory) don't need to second-guess what they know. You can trust that they are presenting you with current information and know-how, and you can grow your practice. With an LLM, you'd need to already know the material in order to assess what it gives you as an output, and at that point, what's the point of using it as a learning tool?

The second thread is that the LLM becomes a mirror for your thoughts. Cool, I guess, but do we really need a modern-day ELIZA? The evidence so far is that LLMs basically become our little yes-men. Is this helpful in learning? If what is being reflected back to us is not useful and continues to steer us down the wrong path, is it helping or hurting the learning process?
Finally, the author writes...
"Technique-first training creates exactly the pattern that Your Brain on ChatGPT warns about: People learn to focus on getting the prompt right instead of determining whether the output is valid, complete, or useful."
This, in my view, is the biggest failure. Let's say that I want an LLM to give me a literature review on a corpus of 100 articles. I haven't read any of them, so I don't know if they are useful or quality research, and I wouldn't be able to tell you what the main points are with regard to how I want to use them in my research. To do that, I'd need to do the actual work of reading, assessing, thinking, and cognitively processing those 100 articles. Asking an end-user to determine whether the output is valid, complete, and useful negates the point of having the LLM do the work in the first place.
In the end, it feels like a lot of these articles are just making excuses for why the technology doesn't work and how it could work better, rather than writing this tech off completely, or going back to the lab to experiment with it a bit instead of trying to shoehorn it into everything.