Multilitteratus Incognitus
Pondering what to learn next 🤔
MOOC Evaluation: Beyond the Certificate of Completion
01-02-2014, 13:19 · assessment, cMOOC, MOOC, SloanC, xMOOC

NOTE: This is a repost of the piece I wrote for Sloan-C back in November of 2013. I am reposting it here as a backup. The original can be found here: http://blog.sloanconsortium.org/2013/11/18/mooc-evaluation-beyond-the-certificate-of-completion/
This coming January will be my third year of involvement in MOOCs. Questions have come up in the last year around the issue of why students “drop out” and how to better retain students. Tied to these questions is the issue of evaluation of learners and learning in MOOCs. At this point, I’ve witnessed at least three different kinds of MOOCs, and they all approach evaluation somewhat differently.
During my first year, all the MOOCs I took were of the cMOOC kind. These included, among others, LAK11 (Learning Analytics), CCK11 (Connectivism) and MobiMOOC (mLearning). There was no evaluation of learner knowledge acquisition in these MOOCs because they were focused on community and emergent learning. This meant individuals set their own goals, within the framework of the course, and worked out a plan to attain those goals. In the end, they were accountable only to themselves and to any sponsors they might have had for participating in the learning activity. This lack of external accountability earned cMOOCs the nickname “massive open online conferences” instead of their original “massive open online courses.” This was OK, as far as I was concerned, because I was happy to learn new things rather than having to demonstrate them on a piece of paper.
When Stanford, MIT, Harvard, and other universities dubbed “elite” decided to join the game, learner evaluations entered the picture, and entered in a systematic way. This is partly because these courses were converted from existing campus courses, where evaluations were the norm. This raised new considerations, such as: How does one evaluate hundreds of thousands of students? Even in a “mini-massive” course of hundreds of students, evaluation is an issue because it takes so darned long. As a result, automated testing, by way of multiple-choice quizzes, and peer reviews entered the picture. Big data and crowdsourcing also seemed to provide answers. In the end, you received a nice little certificate of participation if your overall grade was above a certain percentage. In this sphere, a “with distinction” mark was also available for students who went above and beyond the minimum requirements. As I’ve written elsewhere, because the requirements vary from course to course, the “with distinction” mark means little since there is no standard rubric for it.
Now we’ve seen other MOOC practices emerge. One recent category is the project-based MOOC (or pMOOC). The OLDS MOOC and, more recently one could argue, Mozilla’s Open Badges MOOC fall into this category. In this type of MOOC, participants work on a project (or projects) throughout their involvement in the course. The projects receive improvement-oriented comments from peers, or they are evaluated by a team. The work seems substantial enough to keep achievement hunters (those just looking for a quick path through the MOOC in order to get a piece of paper, or a badge) at bay.
The question of learner evaluation in MOOC environments is quite big. Yet it all comes back to one fundamental question: What is the final outcome of your MOOC? The “C” in MOOC stands for “course,” and we have this notion in our heads that courses have evaluations and grades. Perhaps it’s time to reassess this aspect, just as we need to reassess the significance of retention rates in MOOCs. Some self-check feedback is probably worthwhile in any course, MOOC or not. In smaller courses, establishing that you are on the right path might be as simple as a discussion forum or a conversation with peers and the instructor, so no test is needed. In MOOCs, depending on the subject, some automated testing may help. Peer reviews (not peer grading) may help build a community of learners who scaffold each other’s learning endeavors.
Evaluation as a means of self-check has its place. The proof of whether you can put this knowledge to use, however, is in practice. A piece of paper saying you participated in a MOOC is, for now, not worth the paper it’s printed on. Institutions offering MOOCs do not give you credit for the course, other institutions don’t accept it for credit, and no one, at this point, recognizes that piece of paper. Even Coursera’s Signature Track, with proctored exams, has not yet gained recognition. So, at the end of the day, if learners aren’t getting some external recognition of their learning, what is the point of formal graded evaluations in MOOCs? I would argue that it’s time to go back to the drawing board. When designing MOOCs, do a learner and learning-outcome analysis, and work toward developing MOOCs that make sense for that environment. Then work on evaluation mechanisms that make sense for your stated course goals.
What are your thoughts on the subject?