AAAL wrap up | 30-03-2012, 18:49 | AAAL, Conference, linguistics
There were a number of really interesting sessions, and some not so interesting ones (mainly because it seemed like a lot of hand-wringing and self-flagellation on the part of my colleagues presenting). In the next few days I will be posting about some of the interesting stuff I learned (or at least witnessed) at the AAAL conference. For today, though, I just wanted to write how surprised I was that there were only one or two people tweeting at the conference; but then again, not so much. I did notice a lot of people taking notes on laptops, and on good, old-fashioned legal pads, but technology wasn't a big presence at this conference. Oh well, perhaps next year in Dallas there will be some more tweets from the conference.
As an aside, I would have liked to have met these few twitterers in person, but at such a massive conference it didn't quite happen.
From the conference I walked away with two new books, The Handbook of Business Discourse and The Language of Gaming; more on these once I've read them :-)
AAAL Conference, Day 4 Liveblog | 27-03-2012, 07:11 | AAAL, Conference, linguistics, liveblog
Check back for liveblog updates
AAAL Conference, Day 3 Liveblog | 26-03-2012, 07:09 | AAAL, Conference, linguistics, liveblog
Check back for liveblog updates
AAAL Conference, Day 2 Liveblog | 25-03-2012, 19:09 | AAAL, Conference, linguistics, liveblog
American Association for Applied Linguistics 2012 - Boston Conference | 24-03-2012, 19:07 | AAAL, Conference, linguistics, liveblog
Also, my first foray into liveblogging with Storify.
Commentary on Commentary on Comments | 23-03-2012, 08:24 | #change11, MOOC, participation
As far as comments on other blogs go, I can't really speak on behalf of other Change participants, but from my point of view, and from a workflow perspective, most of my reading happens offline, using ReadItLater on my iPad. This means that in order to comment I need to remember to consciously go back to that blog post and comment on it. Sometimes, if comments are longer, like this one, I tend to just post things on my own blog.
As far as posting new blog posts goes, I too have noticed a sharp decline in Change11 posts. Personally I think that this is a function of the length of this MOOC... in other words, it's too long. This was billed as the mother of all MOOCs last summer, and perhaps in terms of length it is, but is longer really better? Shouldn't the content speak for itself? Shouldn't the interactions take center stage? I think this length is detrimental to the cohort that started last September. What makes a MOOC valuable are the connections you make with other participants, and the conversations and cognitive development that occur because of those connections and interactions.
When people become fatigued, they stop posting as much; when this happens, they either drop out or lurk. If they lurk, you can entice them to participate more by bringing new cohorts in; at the same time, those cohorts need to be at the same(ish) level as the lurkers, otherwise it feels like waiting for people to catch up. Sometimes you will get some really good, original, thought-provoking blog posts that can spur discussion, but for me, by and large, I haven't felt like looking at blog posts that deal with last October's topics.
With this lack of peers posting (a variety of peers, most importantly, not just the usual suspects), a MOOC can go into a death spiral. For what it's worth, there are some interesting topics near the end of the MOOC, but at the same time I see a lack of participation. This means that the MOOC is becoming more and more just another self-paced course, done alone, because fewer and fewer people participate.
Thus we come back to a chicken-or-egg dilemma. You need some participation to spawn more participation, but how do you get that seed of participation? At the beginning of any MOOC, the advertising around the MOOC brings together some usual MOOC suspects (including me), who bring enthusiasm, and this enthusiasm brings others on board. Once those initial people move on to other MOOCs, or if they decide that this specific MOOC isn't for them, you need a certain base to maintain the discussion. In MobiMOOC, for example, our research showed that there was a core group of people who participated throughout the MOOC, which helped to get others on board. The problem with Change is that the core has departed (or seems to have departed).
NERCOMP Conference summary(ish) | 22-03-2012, 17:30 | Conference, NERCOMP, Presentation, storify, web2.0
In any case, here is a quick recap of our two presentations:
Monday was GoodReads day. Christian presented his proposal for social reading (and reading outside of class) using GoodReads in a lightning round; I helped him flesh out the concept and the flow, and also served as the computer master for the presentation, showing off the things he was talking about. It was a really great presentation, and we actually ended up getting a lot of questions afterward. I liked the lightning-round format because it forced you to be concise (15-20 minutes in length) and to really think about your message. This was my first lightning round, and I think I would do it again.
Mobile Enabled Research:
This was my poster session on Tuesday, again pretty well attended. The poster sessions were set up nicely: they had their own area, which encouraged people to walk around, read, and engage in conversation. We ended up getting a lot of questions on using iPads to facilitate the research process and, in many cases, disintermediate the desktop. This was also the first poster I have ever created. It too was a fluke, as I thought I had requested a full session. In any case, I liked the poster format because it allowed us more time to converse with a lot of interesting people, rather than just having a "broadcast" session, which is what our full session would probably have been formatted as. I think I will probably do more posters in the future. The foam-core mounting also wasn't that expensive.
Overall, I met some pretty interesting people at NERCOMP, and saw some familiar Twitter faces out there as well. The expo floor was a bit light compared to Campus Technology, but this meant more talk with peers from other institutions. Looking forward to next year.
Here's my little Storify Story:
It doesn't all start with engagement | 12-03-2012, 17:00 | #change11, assessment, Design, engagement, instructionalDesign
I was reading a post on Change11 the other day, and this video was talked about. The essence of the video is that in education these days we've gone crazy with assessments and we forget about the learner. Fair enough; I believe that this is indeed true in some states and school systems, especially with things like No Child Left Behind.
The problem comes in (for me at least) when people start talking about engagement first, and not learning objectives or something to assess. They actually see putting the assessment/objective first as wrong, and this is where they lose me. As educators we need to start off with a certain goal in mind; once the goal (the end-state knowledge and/or behaviors) is known, we can then decide how we would check for those behaviors. Yes, I am putting assessment second because it's important to know how we will be assessing what we want our learners to know.
Once we've worked out the goals and assessment methodology, we can then move on to content; perhaps not 100% of the content, but a sizable amount that constitutes a core. Additional content will be determined after the learners have been identified and their strengths, needs, and interests are known. You can think of several possible scenarios while developing these materials. Learner engagement comes in at the end, once you know who your learners are.
I am getting a vibe here that these folks are overreacting to poor implementation of a good method, and to unresponsive teaching. Just because the water is dirty, you don't throw out the baby with the bathwater. Engagement is a very important factor in education, but it is not necessarily the starting point, at least not in classrooms of 15+ students. It may, however, be interesting from a tutoring or homeschooling perspective, where you've got one-on-one time, and usually loads of it.
On comprehensive exams | 11-03-2012, 15:03 | #change11, academia, assessment, PhD
I was reading an opinion piece in the Chronicle of Higher Education this past week on comprehensive exams. The article deals mostly with PhD-level comprehensive exams, the type of exam that serves as the gatekeeper between the coursework in a PhD program and the dissertation stage. The main thesis of the author, at least what I got out of it, was that comprehensive exams seem to look backward at the curriculum, a memorize-and-regurgitate model, rather than looking forward toward a synthesis of existing information (gained through coursework) that leads to new knowledge.
From my own experience, I had to take comprehensive exams for one of my master's degrees (Applied Linguistics), and the buildup toward those exams was nerve-wracking because the comps were like the academic boogeyman. Every student in the program who had not yet done comps was feeding the fear, uncertainty, and doubt (FUD) of every other student, who had their own FUD. People who had already taken the comps were more easygoing about them. The two things that really scared everyone were that:
- We hadn't done written comps before, so we didn't know what was required of us
- We only had two chances to pass the exam, so if we didn't pass, those 30 credits, in the end, meant a lot of wasted time and potentially money, since you wouldn't receive a degree.
In the end, the great majority pass on the first try, and even those who don't usually pass on the second, since they receive coaching and mentorship from the faculty after a failed first attempt. I passed as well, with a high pass, something that surprised me given my anxiety over the exams. Our comps didn't focus on regurgitation, but rather on information synthesis. We were given broad statements and asked to explicate them and take a stand, based on the literature we had come across. Some of the questions were modified or expanded versions of what we had done in class, so they weren't completely out of the blue.
Going back to the author, I do agree that written comprehensive exams, or oral ones for that matter (even though I haven't done oral exams), even those that I took, focus on the quick-wittedness of the student: how well they are able to perform under pressure, with questions they potentially haven't seen before. While it is true that in a real-world environment such questions do come up, and you do have to take an initial stand on them, you often have the ability to take a break from the debate or conversation and look things up, or come back to issues later on. It is infrequent that people are asked for a major decision, or an in-depth explication, on the spot.
Perhaps a better assessment of a student's prior learning, and their ability to synthesize information, lies with some sort of Qualifying Paper (or series of Qualifying Papers), where they take what they've done in class, add their own individual readings, and put forth a series of papers that are of publishable quality. It might even be a bonus if these papers tie into the dissertation process (for PhD students anyway): the QP work would connect to what they are doing, they would demonstrate that they could be successful in the dissertation stage, and they would have some work on their dissertation done already before officially becoming a PhD candidate (not to mention that they come out of the experience with a couple of papers published, or nearly published, which is important in that line of work).
I am wondering what others think about comprehensive exams in general. Like? Dislike? Love-Hate? :-)
Lessons from LMS core concepts session | 10-03-2012, 05:47 | librarianship, NERCOMP, teaching, workshop
My initial thought for a course on Learning Management Systems was that students would get hands-on time with learning management systems (three different ones, mostly of their choice) and that they would have to objectively evaluate those systems against a rubric that made sense for their organization (or the organization they were contracted to work for, if this were a case study). Beyond that, the theory aspect seemed a little light, and I didn't want to just create another "hands-on" course that could become obsolete in the next five years; this isn't good for the learners.
What I came out of the session with is something similar to the structure of a typical "introduction to reference work" course in library school. It is true that in a library-school reference course you get a lot of hands-on time with many different types of information sources (or at least you should), but the other component of the coursework is about being an effective reference librarian: asking the right questions, teasing out what the library patron is really there to find, and helping them find it. The same types of recommendations came out of this session. It's not really all about the specific instantiation of the LMS, but rather about being able to go out and successfully conduct design interviews (think reference interview for instructional designers) with the subject matter experts teaching the course, and finding which tool is the best one for what they want to do.
I think I now have a somewhat better understanding of the theoretical framework for this LMS course: it will probably focus mostly on case studies and communication-related articles for those interviews, with the comparative LMS evaluation as a semester-long project.
If anyone has any more ideas, feel free to chime in :-)