Loitering in the Witch’s House: My MOOC Experience

Whether you love Google or hate it, there’s no denying the fact that the company is at the leading edge of open source apps and educational resources. And whether we like it or not, the majority of students are using Google as their primary research tool (and, according to a study summarized by Sarah Kessler, they’re not using it very effectively). I use Google apps extensively in my hybrid courses and, recognizing a need on my students’ part to learn how to use the internet more effectively and critically, I’ve begun to integrate the Google search engine into my research workshops. So when Google recently offered a MOOC entitled “Power Searching with Google,” I immediately signed up, hoping in the process to kill two birds with one stone: 1) to learn some Google search strategies that I could pass along to my students, and 2) to get a taste for the MOOC experience. It was a mixed bag.

Set-up
In terms of set-up, the course was very straightforward. Lessons consisted of video demonstrations followed by activities designed to test your ability to apply the skills addressed in each video. Assessment consisted of a pre-course assessment (meant to gauge existing knowledge of Google search features), a mid-course assessment, and a final assessment. The scores for the mid-course and final assessments were averaged together to determine your “grade” for the course and a passing grade resulted in a certificate of completion. There was also a discussion forum that you could voluntarily participate in.

Pros
1) Individualized pace: While there were deadlines for the mid-course and final assessments, you could work through the course materials at your own pace as long as you were ready to meet those deadlines. This worked great for me because I could complete individual lessons or entire units as it suited me. Considering the hectic schedule I have this summer, this was by far the most effective aspect of the course for me.

2) Paced release of materials: While I could work at my own pace on the materials available to me, I was limited by the fact that the units were released at a graduated rate. This actually turned out to be a positive for me because, since I couldn’t see the entirety of the course materials at the beginning, I wasn’t overwhelmed by the amount of material I would need to cover and I remained focused on each set of materials I had access to.

3) Do-overs: Both practice activities and assessments were set up to allow multiple attempts at answering questions correctly. You could check your answers before submitting your assessments and wrong answers to practice activities usually triggered some feedback in terms of what to review in order to better understand the skill addressed in the activity. I found this to be a very effective method for learning because I didn’t have a fear of failure hanging over me that a single-attempt set-up would have created.

4) Leveling up or down: While I didn’t actually make use of it, there was the option to change the difficulty level of practice activities to either an easier activity or a harder activity. Again, I see this as being an effective method for individualizing assessment. There was also an option to skip activities and see the correct answers. This was effective for those search functions that I was already familiar with and didn’t necessarily want to waste my time trying out; being able to see the answers allowed me to self-assess my prior knowledge and move forward quickly if I wanted to.

Cons
1) Boring videos: I don’t expect lecture and demonstrations to be entertaining, but I do expect them to be somewhat engaging on an intellectual level. The videos were not long (the longest was a little over eight minutes), and this brevity was their only saving grace. It wasn’t just the fact that the instructor sat on a couch the whole time (I suppose in an effort to make the instruction feel more personal), but the content itself dragged in several lessons. Some lessons were far too simplistic and some were overly repetitive. A boring presenter is boring, whether IRL or on video.

2) Google Chrome required: All demonstrations were done in Chrome, so I could not replicate some of the tasks, such as the Search by Image function, as demonstrated. There was no discussion by the instructor of the different ways to complete these tasks in other browsers, though I did eventually receive help via the forum (after I had completed the final assessment). This often led to frustration on my part. If I had taken this course IRL, I would have been able to ask for clarification from the instructor.

3) Difficult tasks given short shrift: There were a few lessons that contained difficult concepts, such as using and interpreting results on WHOIS databases. There was little time spent discussing and demonstrating how to use these databases (although the instructor acknowledged the difficulties of using them), yet being able to do so was part of the final assessment. As a student, this was extremely frustrating and I quickly gave up trying to figure it out by myself (my frustration is demonstrated with some rather derogatory doodles next to my notes on this lesson and a final assessment of the lesson as “useless”). Again, IRL instruction would have afforded me the opportunity to seek clarification on these muddy points and perhaps encourage the instructor to extend the time spent on the databases.

4) Plug-and-chug assessment: While the practice activities required direct application of skills, the assessments were multiple choice and fill-in-the-blank problems that, for the most part, simply required regurgitating information from the instructor’s demonstrations. At this point, I’m not certain how much of the course I have really learned and internalized and how much I’ve simply managed to maintain in my short-term memory.

5) Forum confusingly organized and asynchronous: The few times that I did try to use the forum, I had difficulty navigating it. It was supposedly organized by lesson, but I could never find a direct link to the discussion threads for a specific lesson, and it seemed that most people just posted wherever they felt like it. When I posed questions, I did not receive immediate (or even proximal) feedback; the earliest I received an answer was a little over 24 hours after posting the question. Of course, one aspect of open online learning that MOOCs bank on is student participation; they count on the fact that other students are probably online when questions and comments are posted and are likely to respond faster than forum moderators. However, in this particular MOOC students did not seem particularly eager to help each other out or respond to each other’s posts, and all of my questions were answered by forum moderators.

What does this mean for MOOCs?
My initial response to the idea of MOOCs was hesitantly hopeful. Having completed one, I’m pretty much stuck with the same reservations about them that I have for tuition-based online courses. They are inherently more suited to certain types of students, i.e., those who are highly motivated, self-aware learners with good time management skills and a high tolerance for working alone and not having immediate access to and feedback from their instructor and classmates.

In terms of instruction, it requires as much, if not more, effort to make online instruction engaging because it’s far easier for students to become disengaged with an online course, especially one that’s free and has no extrinsic motivations to stay connected and finish. The one thing that’s possible in online course design that MOOCs cannot capitalize on, due to their massive size, is individualizing instruction. I’m not completely sure of the purpose of the pre-course assessment for Google’s MOOC (unless it’s simply for their own data collection purposes) because the rest of the course was not structured based on my answers to the initial assessment questions. IRL and in small online courses, diagnostic assessments allow for individualization because you can use the information garnered to help direct students towards those materials that will be of most use to them in terms of the gaps in their prior knowledge.

My first MOOC was like the gingerbread house in Hansel and Gretel. It seemed to offer an educational paradise: no-cost, developed and delivered by domain experts (whose “certificate of completion” holds cachet), flexible in terms of when and how I completed it, open in terms of whom I would be sharing the experience with. Unfortunately, the reality did not live up to the fantasy. Of course, unlike Hansel and Gretel, I could have left whenever I wished. Instead, I stuck it out to the bitter end, hoping to find some redeeming quality in something that held such promise.

What does this mean for hybrid and fully f2f courses?
We need to continue to figure out how to capitalize on the best aspects of f2f learning and online learning. Some variables remain the same, no matter what the medium of instruction. Boring is boring. Materials and activities need to be intellectually engaging and individualized to the greatest extent possible. Community is essential; students need access to their teacher and their classmates, whether it’s physically or virtually, and some of that contact needs to be synchronous (which is one reason that I think hybrid courses are so effective). Assessment needs to be formative, immediate, and authentic. And no type of assessment can measure engagement. I earned a pretty high score in the Google MOOC, a score that does not reflect the boredom and frustration that I experienced. While I certainly came away from the course with an extended set of Google search skills that I did not possess prior to the course, I’m not sure that I would have completed the course had I been less motivated (the certificate of completion will help to pad my annual faculty review packet).

How many of our own students have walked away from our courses with A’s or B’s, despite boredom or frustration? If we base the success of our courses on the grades that students come away with, we’re ignoring the aspects of learning that MOOCs make obvious: the hardest working and most motivated students will succeed, no matter how poorly designed the learning experience. So, it’s important for students to have opportunities to share anecdotal feedback, not just at the end of the course, but from the very beginning and throughout the course. And it’s important that we be willing to act on that feedback.

In hindsight, I now recognize that it will be very difficult for designers of MOOCs to do this. In fact, it is difficult for MOOCs to enact most of the learning practices that I value: learning-centered instructional design; a skatepark-like learning environment; immediacy; flexibility; authenticity; hybridity; intimacy with the materials, ideas, and people who make up the body of the course. Instead of heralding MOOCs as the salvation of education, we need to recognize them for what they are: an alternative that works for some learners on some levels. However, it’s also an alternative that is still in its infancy and still has room to grow; in fact, I think that DS106 demonstrates what MOOCs are capable of with the right kind of instructors and objectives. Whether or not they can, as a general rule, get there is up for grabs. What makes DS106 work is that it is, like the best IRL course, a truly student-centered community, in that students develop and help assess the assignments. It’s a course completely devoid of sticks and carrots and completely built on the desire to be a part of a unique learning community.

This ideal of a free and open learning community built upon choice and intrinsic motivation is the real promise of MOOCs. But if we continue, as some institutions and companies do, to look to MOOCs as a vehicle for the mass-production and broad dissemination of canned content, we’ll never get there.

Hacking Assessment: Redesigning the Numbers Game

In a recent post, I outlined some ideas that I have about integrating principles of game design into the FYC course. As I pointed out, I’m not all-out gung-ho about the idea of the gamification of education. It turns out that many of my reservations about this latest trend in reforming education are shared by game designers themselves. In her post “Everything Is Game Design,” game designer Elizabeth Sampat argues that it is over-reaching to assume that any group of practitioners can co-opt the extremely complex and abstract principles at play in a successfully engaging (to some) game and apply them to any other domain:

“Gamification” assumes all games share the same mechanics, which means everything that’s gamified is basically the same shitty game. Using badges and leaderboards and offering toothless points for clearly-commercial activities isn’t a magic formula that will engage anyone at any time. Demographics are different, behavior is different . . .

These are the same issues with gamifying the classroom that keep me from wholly embracing the concept. For one, the whole point of a game is that it is . . . well, a game. Games are voluntary. As soon as you force someone to join in a game, it stops being a game for them. It becomes a compulsory activity devoid of intrinsic value, and all of the extrinsic rewards you can throw at them, while perhaps artificially increasing their motivation to play, cannot turn it back into a game (except, perhaps, in the negative sense). Even when we gamify a class, we’re still making the learning that takes place within that game compulsory, effectively negating any positive characteristics of gaming that we are attempting to channel. And, as Sampat points out, the characteristics that make any game engaging cannot be standardized. What works for one gamer doesn’t work for another. So, in many ways, game designers face the same kinds of issues and challenges that educators face.

Another point that I think has been largely overlooked in this debate is that, for the large majority of students (if not all), school is already a game. We have goals (behavioral or learning objectives), challenges (in-class activities, homework, exams, and standardized tests), and rewards (grades). We’ve got levels (grade levels based on age in K12 and hours-earned status in college) and leaderboards (A/B honor roll in K12 and President’s and Dean’s lists in college). And we have clearly defined roles (teacher as locus of power and expertise, student as powerless and largely silent novitiate). Some students figure out pretty early how to play the game. In college, these are the students whose identity is inextricably intertwined with their grades. “But I’m an ‘A student,’” they insist when faced with anything other than an A. Other students learn early on how to game the game. These are the students who know how to manipulate the system and those in charge of it and can often be just as successful at winning the game as their overachieving counterparts. But some students never learn how to play the game according to our rules. Others don’t want to play it because they see it for what it is.

Whether we realize it or not, we’re already playing games with our students. And it’s a numbers game. Play the game according to our rules and we’ll reward you with a high GPA and a diploma, with the promise that these things are the badges you need in order to level up to the American Dream. This kind of game is both irrelevant and counterproductive in a culture that is becoming increasingly participatory, rather than competitive, in nature (just read Share or Die: Voices of the Get Lost Generation in the Age of Crisis to get an idea of how important cooperation and collaboration are becoming for those graduating into the current economy). While many educators are fighting to reform the standardized, hierarchical forms of assessment that have been in place since the industrialization of education, until they are successful at effecting a wholesale paradigm shift and not just applying a false facade and calling it reform, we are forced (much like our students) to try to figure out ways to hack the game. As Sampat argues:

Finding the reward structures and the rules that are already in place, and figuring out how to make them more effective, is the key to making life better for everyone— not adding an additional layer of uninspiring mechanics that push us to engage with mechanics that already suck.

Just as games are not one-size-fits-all, assessment shouldn’t be one-size-fits-all, neither in terms of standardized criteria applied to all students nor in terms of evaluative formats used for all courses/disciplines. Just as each course has its own unique set of learning objectives, each course should have a different method for assessing how students go about achieving those objectives. I think it important to explore various assessment methods in an effort to find which is the most effective for a particular course. For example, I have found that a portfolio method is exceptionally well-suited for my composition courses, as it allows for the abstract nature of the writing process and the subjectivity that characterizes the act of evaluating and valuing a piece of writing. But in trying to incorporate a portfolio system into my speech courses (both an introductory oral communication class and an advanced argumentation and debate class), I have had less success, though for different reasons (perhaps due to the differences among the students: freshmen and upper-level secondary-education majors, respectively). As much as the portfolio method places value on each student’s individual learning needs, goals, and achievements, within the current grades-based system, students in certain courses need to be able to visualize their learning at both a qualitative and quantitative level. So, what are the alternatives?

Peer Assessment
One option that is gaining ground is peer assessment. Cathy Davidson has successfully explored this method in her “This Is Your Brain on the Internet” class (read “How to Crowdsource Grading” for her description of the process and the thought-provoking debate that followed and “How to Crowdsource Grading: A Report Card” for an overview of her students’ responses to the method). Many MOOCs utilize peer assessment out of necessity. According to Debbie Morrison, within the MOOC environment, peer assessment results in an enhanced learning experience for the student, as grading their peers’ work requires a deeper engagement with course content.

I’ve utilized peer assessment in both of my speech classes to varying degrees and with varying levels of success. In my introductory speech class, the students work together at the beginning of the term to develop a checklist for an effective speech (I don’t use rubrics because, in my experience, they become just another hierarchical form of grading that allows students to retain many of the gaming habits they adopted in K12). They do this by watching several speeches on YouTube and creating individual lists of do’s and don’ts, which we then collate into a master list. For each speech, students are evaluated by five randomly selected anonymous peers, who use the checklist to assess the speech. The students are also filmed and they must use both the video and their peers’ checklists to compose an assessment of their speech that they post to an e-portfolio, along with all artifacts associated with the speech (outlines, bibliographies, slideshows, photos of visual aids, the video of the speech, etc.). For this particular class, I have found that a combination of self and peer assessment has been much more effective than a solely self-based assessment (which tended to be superficial) or even an instructor-based assessment (in which students received only one assessment, as opposed to five, and tended to focus more on improving their “grade” than becoming a more effective speaker). With the peer assessment method, students’ speeches are being evaluated by their audience and their focus becomes oriented towards improving their audience’s response to subsequent speeches.

I have tried this kind of peer assessment in my debate class with far less success. For one, the class is much smaller, and consists, for the most part, of a cohort of sophomore and junior-level secondary education majors. These students tend to be very cliquish and ironically conservative in terms of the practices they expect in the class; they tend to be “A-gamers” obsessed with acing the course and uncomfortable with the level of abstractness and improvisation involved in debate. As a result, they tend to assess their peers over-generously and resist critiquing one another (one class even admitted to giving each other positive assessments across the board because they didn’t want to “hurt someone’s grade”). They look to me as the expert, so their portfolio reflections tend to be focused on flattering me and the course and highlighting aspects of their performances from my point of view (“If I were the instructor, I would give this speech a [insert grade here]”). Despite my best efforts, these students are resistant to assessment formats that are not instructor-based. So what’s a disruptive pedagogue to do?

Contract Grading
While I was at first dismissive of contract grading, based on the distaste I harbor for the artificially hierarchical nature of any type of grades-based assessment (and the name’s implications of a kind of capitalistic supply-and-demand relationship between student and teacher), I have warmed to the method for its ability to bridge the gap between my students’ need for a quantitative value to be placed on their learning and my own objective of encouraging them to recognize and become complicit in the qualitative value of that learning.

For one, I’m hoping that it will eliminate the specter of grades that haunts the course by directly addressing the students’ anxiety regarding their status in a course that has no exams or other easily quantifiable activities. Students will decide what grade they wish to work towards and will have a specific, objective set of criteria that they must achieve in order to earn that grade (yes, I know this sounds just like a syllabus with a traditional grading schema, but contract grading makes the implicit aspects of the traditional schema explicit and, in many ways, mimics the game design principle of starting at zero and gaining points as you go). Once the question of grades is out of the way, perhaps the students will be more willing to focus on learning and improving.

Secondly, contract grading requires student input in regards to the challenges that must be met in order to level-up (yes, I know I’m wading back into gaming territory, but, as I’ve argued, our goal should be figuring out what works for a particular course and cohort of students rather than a wholesale dismissal or acceptance of any one method or theory). Often, in order to earn an A or a B, students must complete additional learning tasks, sometimes choosing between several options, which they can be invited to develop. This aspect of contract grading is the one that I find most promising in terms of encouraging student investment in the learning environment. While I have long preached to students that, in the words of Lennon and McCartney, “in the end, the love you take is equal to the love you make,” contract grading makes student-centered initiative an explicitly integral component of the course.

Thirdly, contract grading will allow me to address both the students’ insistence that I fulfill the role of expert assessor and my own wish that they fulfill the role of deliberate and reflective practitioner. Different grades require different levels of mastery, so students who contract for a certain grade must revise and/or re-attempt assignments that don’t demonstrate mastery. While my debate students can’t re-do a live debate, they can complete a video re-enactment that improves upon their live performance or record a play-by-play self-critique using VoiceThread or screencasting software. In addition, some of the optional assignments can require peer or self-assessment or other types of reflective learning practices.

While I’m not completely comfortable with contract grading (just as I am not completely comfortable with gamification), I also recognize that other assessment methods are not working for my upperclassmen and, as a result, are interfering with my efforts to push them beyond a superficial engagement with their learning in the course. I believe firmly that we must recognize our students’ needs, values, and histories; but we can’t pick and choose which of those we take into consideration when designing their learning environments. Sampat makes a point that I think is important for us to keep in mind in the process:

The core principle to remember is that game design is everywhere. Instead of trying to stick a crappy, half-formed game onto real life, the real challenge— the one that’s tough, the one that will bring the greatest results— is to fix the bad game design that’s all around us.

Students won’t be open to assessment that values quality over quantity or process over product until we recognize that our current assessment paradigm is a badly designed game that needs to be torn down and redesigned. Sampat suggests two questions to ask when considering whether or not something is badly designed:

  • What’s supposed to be the goal here?
  • Is this experience set up to help or hinder my ability to reach that goal?

I’m game.

Resources on Contract Grading
These are the sources that I consulted to help me to better understand the possibilities afforded by contract grading:


Building a Better Blogging Assignment Redux

One of the sessions at last week’s THATCamp dealt with the issue of designing a better model of student blogging. You can view my Storify of the session here.

I thought that I would add some of my own ideas and discuss how I address some of the issues raised during the session (since, unfortunately, I couldn’t be there).

As noted on the session’s Google Doc, a major problem with requiring students to blog is that the large majority of them are unfamiliar with blogs, so we need to identify effective methods for acculturating them to the genre. Since I’m an advocate of immersive learning, I’ve found that many students begin to “get” blogging by spending a good deal of time actually doing it. But I’ve developed a few orientation assignments that help them get off to a good start.

  • Require students to locate, deconstruct, assess, and subscribe to blogs on topics that interest them: As homework during the first week of class, I have students locate several blogs on a topic that they’re interested in. They pick the best three and subscribe to them. While exploring blogs on their topic, they create a list of criteria for an effective blog. We use a class meeting to collate their criteria into a master list that they can then use as a checklist for their own blogs. Next term I’m planning to expand this assignment by having students work together to deconstruct a blog.
  • Teach them how to comment: This is something that I still struggle with. I provide students with several resources on commenting, including those mentioned at the session; nonetheless, many of them provide largely superficial comments. Next term I plan to have students read and assess comments on the blogs they’ve subscribed to and add their own comments. Similarly to the assignment above, students will work together to establish criteria for effective commenting.

A second, and equally important, issue is the logistics of blog management, both for yourself and the students: controlling pacing (so that you don’t have to deal with an influx of posts and comments at the last minute), encouraging engagement with the blogs (both their own and their peers’), and assessing the blogs.

  • Establish submission guidelines (and stick to them): I establish strict deadlines for post submissions and stick to them from the very first post. I generally make the deadline the night before class in the case of totally face-to-face courses. For my hybrid courses, the deadline is on the day that we do not meet. Either way, I set the deadline for a time well before I and other students need to access the blogs.
  • Encourage engagement with peers’ blogs: I require that students subscribe to each other’s blogs and read and comment on a certain number of them each week. I’ve tried to encourage more depth to their comments by staggering the due dates for posts and comments (generally they have 12-24 hours after the blog post deadline to read and respond to peers’ posts). I’ve had even better success this past term with combining this with rotating students’ roles between posters and readers/commenters. This allows them to fully focus on and engage in their role. This method requires reducing the number and frequency of posts for each student, but I think that the pay-off will be worth it, especially by placing as much emphasis on their comments on others’ blogs as on their own blog posts (which means that I’ll have to invest more time into assessing their comments somehow).
  • Make the blogs an integral component of the course: I try to immerse students in their blogs as much as possible because I’ve found that the more they blog, the better bloggers they become. I now require that all of their writing be done on their blog and I ask them to blog and comment on blogs as frequently as possible (at least once a week). I think that it’s a major mistake to have students blog but then not integrate the blogs into the classroom interactions in some way; this encourages students to view the blogs as secondary to the other class work. In my literature courses, the students’ blogs become the fulcrum for the class discussions. I encourage students to pick the most thought-provoking for us to look at together in class. In my FYC courses, I pick one model post each week for us to critique as a class, asking students to assess the post in small groups, looking for reasons why I selected the post as being a good model. Since the class uses Google+ as a virtual learning space, I also “plus 1” those posts that are especially thought-provoking, well written, and/or visually appealing (I encourage students to do this, as well); this provides students with almost instantaneous feedback and encourages those who might not have read and/or commented on the posts to do so. This also results in a type of gamification of the blogs, as some students begin to work to earn “plus 1’s” from me and their peers. Next term, I plan to also encourage students to use other social media to promote and “like” their peers’ posts.
  • Involve students in the assessment of their blogs: In a previous post, I outlined how I require students to self-assess their writing. I have been happy with the way I’ve asked students to create a portfolio of their blog posts to submit to me at the end of term, rather than assigning a grade to each individual blog post (I’ve tried to eliminate traditional grades as much as possible in my classes). Normally, I have students do this via a final assessment form that they fill in and submit to me via email, hyperlinking to specific posts that they want to include in their assessment, and discussing in detail why they selected them and how they demonstrate what they’ve learned about writing. But I’m considering remixing Mark Sample’s idea of a blog audit; I think that making their reflections public on their blogs will encourage an even deeper consideration of who they are as writers and what they’ve done as bloggers over the course of the term, mirroring the way that many bloggers use their blogs as reflective spaces. I also like his idea of having students revisit and revise some of their old posts, which is something I used to encourage students to do with their writing before I switched to blogs, and would like to re-incorporate into their portfolio creation.
  • Utilize formative and peer assessment: This is still something that I’m tweaking. So far, I’ve found my method for providing formative assessment effective (and students have indicated the same). What I haven’t been able to integrate as effectively is peer assessment. I would love to use a badge system, like Mozilla’s Open Badges, but I haven’t had the time to figure out the best way to do so (or if it’s even possible, since I don’t know how to code or if it’s necessary to know how to do so to use the program, two issues I’m hoping to remedy soon). In the meantime, I’ll encourage the use of readily available social media feedback systems such as Facebook’s “like” and Google’s “plus 1” buttons.

A third issue that seems to have been prevalent during the session is that of how to allow for disruption and alternatives within the blogging domain.

  • Allow/encourage alternative uses for blogs: Since I require that students publish all of their writing for the class to their blogs, their posts sometimes contain nontraditional material (although I always try to help students understand that, with the advent of photoblogs, vlogs, and podcasting, there is no longer such a thing as traditional blog content). For example, this term I’m requiring my FYC students to use Storify to create their annotated bibliographies and then embed their stories into their blogs for comment by me and their peers. Last term, my students participated in DS 106, which meant that their blogs became populated with memes, mashups, animated GIFs, and SoundCloud clips.
  • Disrupt the digital environment: Interestingly enough, as participants were discussing Mills Kelly’s ideas about disruptive pedagogies and then subsequently considering ways to disrupt student blogging, I was blogging about Paul Fyfe’s theory of teaching naked and considering how to disrupt the digital environments within which I ask my students to work. One idea that I blogged about that serendipitously showed up on the blogging session Google Doc is that of requiring students to engage with and use their blog posts in non-digital ways. I think that this is an aspect of student blogging that needs more attention and I hope that a conversation can develop around it.

These are just a few of the blogging methods that I have found effective and, as indicated, I’m still working at improving some of them. I encourage those who require their students to blog or who are thinking of doing so to help continue the conversation here, on my Storify of the THATCamp session, on Mark Sample’s THATCamp blog post, or on Twitter (use the #thatcamp hashtag).

How to Teach 150 Years of Literature in Four Weeks (without Drilling and Killing Your Students)

photo credit: IronRodArt – Royce Bair (NightScapes on Thursdays) via photo pin cc

I just finished up my summer short-term American literature survey course. It’s a grueling ordeal, for instructor and students alike. My course covered the late-19th century to the present day, and I had to provide foundational knowledge of this period of American literature in a matter of four weeks, which meant meeting for two and a half hours, four days a week. Normally, this type of accelerated literature survey course is handled via a brutal schedule of lecturing for the entire class period with a ten-minute break at a convenient stopping point. Some instructors give a midterm two weeks in, while others just give a final on the last day of class. I don’t know why students voluntarily subject themselves to this, but they do, in droves, every summer.

The problem, for me, is that I no longer believe that the lecture model is an effective way for students to learn, and it certainly is not an effective way for them to engage with the texts they’re being asked to read. I suppose this is why so many students choose summer survey courses; under the lecture model they don’t even really need to do the enormous amount of reading required because they can show up, try to copy down everything the lecturer says, and then regurgitate it on an exam, and, voilà, they’ve dispensed with having to do any kind of real learning or thinking about the texts themselves because the instructor has pre-digested them for them (because of the short nature of the course and the inordinate amount of reading required, there are generally no research papers or other types of projects). So, the problem that I faced was how to make such a course an active learning environment.

Firstly, I had to re-think how to best use the two and a half hour class meetings. While a discussion-based model was the most obvious method for putting more onus on the students, I couldn’t expect them to carry a discussion for the whole period. Plus, I’ve been teaching long enough to know that students often need coaxing and cajoling when it comes to in-class discussion; they’re not ordinarily forthcoming with ideas and arguments about the texts they’ve been asked to read. So, I also had to consider how best to use their out-of-class time to help them prepare for in-class discussions.

One of the best methods that I’ve used for encouraging in-class discussion is to have students begin thinking and talking about the texts before class meets. Since the class was small (ten students), I thought that a class blog would be the best way to do this, so I set up one on WordPress and gave each student authoring status. I had to spend some in-class time at the beginning of the course showing them how to use WordPress, but everyone caught on pretty quickly. I then divided the class up into four groups (one for each meeting of the week) and made each group responsible for both blogging about their assigned readings and leading the in-class discussion about them. On days when they weren’t responsible for blogging, they were required to read and comment on their peers’ posts. If the class had been larger (it caps at 30), I think Twitter would have been a better option than a blog.

In addition to having the class begin thinking about and discussing the texts before arriving, once in the classroom we repositioned the desks into a circle (which I joined) so that we were all facing each other and everyone was accountable for contributing to the discussion. At first, to discourage the class from relying on me to facilitate the discussion, I refrained from speaking for the first 20 minutes (this was harder than I thought it would be, especially when students said things that I found to be thought-provoking); once I was certain that the class no longer looked to me as arbiter of the discussion, I reduced my enforced silence to 10 minutes.

Since I was asking students to begin to engage with the texts via the class blog, I decided to make the workload less onerous by approaching the content via an uncoverage, rather than coverage, model, focusing on depth rather than breadth. I decided that two to three short stories or two to three poets per class meeting was a good number. But I still had the task of reducing three volumes of literature down to thirteen or so authors. I considered allowing the class to select the texts, but decided that the brief nature of the course precluded this option, so I settled on a thematic approach to make the process of elimination and selection easier. I chose the theme of “outsiders, outcasts, outlaws, and anti-hero(in)es” and selected texts accordingly, making sure to have representative texts for all of the major literary movements that the course objectives required me to cover. I also decided to make the course as visually interesting as possible by including film adaptations of some of the texts, as long as the films themselves had artistic merit (I ended up selecting A Streetcar Named Desire, Slaughterhouse-Five, The Outlaw Josey Wales, Smoke Signals, and Fight Club).

In addition to requiring that the students initiate and facilitate discussions of the texts, I also asked them to take an active role in how they would be assessed. We spent part of an early class meeting discussing and establishing guidelines and criteria for the blog posts and comments and the in-class discussions. And I asked the class to create the final exam by submitting potential exam questions each week; these questions were to be developed based on the discussions of the texts that took place both on the blog and in the classroom, as well as the mini-lectures that I gave periodically to help students understand the historical and cultural background of various texts and the major literary movements that they were a part of. I tried to keep these lectures as short as possible and used visual media to give students a sense of the time period and events within which the texts were situated. After providing some basic background information, I then asked the students to work together to develop lists of the themes and characteristics of the particular literary movement under discussion, using the stories, poems, and films that they had read to help them to do so.

In terms of the exam questions that they developed using our discussions and these lectures, the class agreed that there should be a balance of closed and open short-answer questions, so I had them post their questions to a Google spreadsheet to ensure that students didn’t use the same questions and that each of them submitted the appropriate types of questions. What I found was that the students developed exam questions that were very similar to what I would have developed myself.

Despite our limited time frame, I wanted students to see the relevancy of the texts and the issues we were discussing to contemporary issues and their own cultural environments, so I required a capstone project in which students selected a text (a story, poem, film, music video, or song/album) that represented a character or group they saw as outsiders, outcasts, outlaws, and/or anti-hero(in)es and then created a multimedia presentation with the purpose of teaching the class about their selected text and how it fit in with our theme. This required dedicating an entire class meeting to the capstone presentations, but the sacrifice was worth it: it forced the students to apply what they had learned over the term to a text that they normally would not have thought of as requiring or benefiting from closer analysis, and the class as a whole enjoyed seeing their peers’ interests and being exposed to texts that they otherwise would not have been. The selection was varied, from Twilight to Butch Cassidy and the Sundance Kid to The Help, and from “Kick Push” by Lupe Fiasco to “Pancho and Lefty” by Willie Nelson and Merle Haggard.

I think that, overall, the course was a success. While some students clearly put more effort into their blog posts than others, and some students still did not fully engage in classroom discussions due to shyness, I think that all of the students took something away from the course and were forced to engage with the texts in an active way. The biggest surprise for me came on the last day of class, the day of the final. When I walked in, the first thing students wanted to know was whether they could take the final as a class. Puzzled, I asked why. The response was equally surprising: they had worked together all term to understand, analyze, and situate the texts; to them, it seemed only natural that they would demonstrate what they had learned as a class. I expressed concern about how I would assess them if they took the final together. But they had that figured out as well: they would sit in a circle and take turns answering the questions on the exam orally; if a student couldn’t answer a question, they could pass and go on to the next, but they could only pass once without it counting against them. If they could not answer a question completely on their own, their peers could offer help by posing additional questions. Originally, I had pooled 30 questions for the exam and given them the option of selecting 15 to answer. Using the students’ collaborative method, all 30 questions were answered during the final exam, and every question was answered correctly by someone, with only three students needing to use their pass and only two having to pass more than once. What’s more, the class was able to elaborate on answers and point to connections between texts that they had not noticed or had time to discuss during our previous meetings.
So, using the students’ collaborative method of taking the final exam turned out to be both a way for me to assess their learning and a way for them to continue to learn more about the texts and issues we had uncovered during the term.

I don’t know that this would work with every literature survey class. It may be that this one was blessed by serendipity: that I had the right number of the right kinds of students. I don’t know that I could have done this with 30 students. I don’t know that I could have done this with students less motivated or engaged with the content. But I do know that I will never go back to the way I used to teach my literature survey courses. And I will never forget the semester when my students sat in a circle and answered every question that I threw at them and supported and helped each other to think harder and remember more and make connections they would not have made on their own.

If you’re interested, you can view the class blog here. And I’m interested in what you think about the course, especially the idea of collaborative exams.

This Is What a Final Exam Should Look Like

image by freeimageslive.co.uk – gratuit

I used to give traditional final exams, even in my First-Year Composition course. Every semester during finals week, my FYC students would sit in the classroom for two hours writing an essay. Supposedly, this was an exercise in assessment: by composing a full-length essay in class, students were demonstrating what they had learned about writing that term. But I began, a few years ago, to question just what this exercise proved–how much did writing an in-class essay show about student learning?

First of all, it seemed to be a contradictory assignment. I had spent an entire semester trying to convince students to spend time developing their essays–to let their drafts rest for a while before revising them; to proofread carefully, looking for one type of error at a time; to let others read their writing and provide feedback on it to help them see it through their readers’ eyes. And then, for their final, heavily-weighted piece of writing, I asked them to throw all of that advice out of the window and write an essay, from start to finish, in two hours, with no peer review and little time to revise or proofread. Did I have a legitimate pedagogical reason for doing this? To be honest, the answer is no. I did it because it was what everyone else was doing and what I was told I should do. As David Jaffee recently pointed out in “Stop Telling Students to Study for Exams,” traditional final exams are grounded more in tradition than in good pedagogy or the realities of how we want students to learn and what we claim we want them to take away from our classes (and college).

So, I began to re-think how best to use the two hour final exam meeting and to try out different ways of capping off the term with a demonstration of student learning. One thing that always bugged me about classes that required a research project (which, as an English student, was all of them) was the fact that no one, other than the professor, ever got to see the results of all my efforts. And I was always curious about what my classmates were researching and what the results were. Sometimes we’d talk about our projects in process, but the paper itself was usually due on the day of the final, so there was never any kind of whole-class plenary discussion of the topics and issues we had all been immersed in all term. What happened in our research papers stayed in our research papers. Using this experience as an example of how not to make the research project relevant to the course and to the students, I’ve experimented over the years with various ways for students to use the final exam meeting to share their research projects and what they have learned over the course of the semester with the class.

One semester, we had our own mini-symposium (mimicking our university’s annual student research symposium, which I had asked my students to attend that term), with students simply reading their research essays out loud at the podium. And as I’ve discussed in another post, I recently asked my first hybrid FYC class to turn their research projects into multi-media presentations, requiring them to articulate their written ideas in multimodal rhetorical forms. I think that both of these were fairly successful methods of asking students to demonstrate their learning while making that learning relevant to the course as a whole and encouraging students to take pride in the work they had done. But I wasn’t completely happy with either, as they both encouraged a kind of passivity on the part of the students’ audience, including myself.

Then, the other day this video popped up in my Twitter stream and, for me, it was an epiphanic moment:

I couldn’t help but think about what classrooms look and sound like during a traditional final exam–even what my own final exams look and sound like, i.e., a sage on a stage (even though I’ve shifted the locus of power slightly by placing the students on the stage) with a glassy-eyed audience who typically respond with silence when asked if they have any questions. And I couldn’t help but compare those classrooms to the one in the video (you can read more about the research slam at “The Unconference Strikes Back”).

What if final exams looked more like this? What if students shared their learning with one another in the kind of interactive, experiential, small-group method encouraged by the research slam? And what if I could join those moving from group to group, listening to (and perhaps even videoing) them engage in conversations about their learning? What if I asked them to post those videos on their blogs so that anyone could see them sharing and answering questions about their learning?

How powerful would that be?

Pretty powerful, I think. I’ll let you know how it turns out.

The Role of Self-Assessment in Deliberate Practice

photo credit: dkuropatwa via photo pin cc

In my last post, I discussed the need for students to engage in deliberate practice. I think that this is especially true in First-Year Composition courses. For one thing, I’m not sure that we can really teach students how to write. I think we can give them some best practices to follow and show them models of good writing, but writing is one of those skills that you can only learn by doing. And writing, especially academic writing, is a complex skill that takes years to develop. And I only have 14 weeks (or, in the case of my current summer short-term class, eight weeks).

The other problem that we face in the FYC classroom is the fact that our students come to us with such varied abilities and backgrounds in writing instruction. Some have had little instruction in writing or, judging by their struggles to write coherent sentences and organize them into logical paragraphs, poor instruction. Some have had intense instruction in a very structured form of writing (the old five-paragraph, 5-7 sentences per paragraph, keyhole style of essay) that works well on standardized exams but does not allow for the varied disciplinary styles that they will be asked to tackle in college. Some have had their heads filled with a lot of bullshit do’s and don’ts: don’t start a sentence with and or but; don’t use the first-person pronoun; always start your essay with a catchy hook, preferably something that sounds cosmically philosophical; always place your thesis at the very end of your introductory paragraph. I always have my FYC students read a chapter from Surviving Freshman Composition by Scott Edelstein called “The Truth about Freshman Composition” because it does an excellent job of explaining the differences between the kind of writing instruction they received in high school and the kind that they will (hopefully) receive in college, and it also dispels a lot of the writing myths that they almost certainly have been taught. Students are always surprised, and sometimes even angry, that so much of what they were taught in high school has not prepared them for writing in college and, in some cases, was just plain wrong. So I spend quite a bit of time forcing students to unlearn bad writing habits and learn new ones–only the new ones I ask them to learn deal less with how to write and more with how to think about what they’re writing and how to assess how well it accomplishes their purposes.
I provide them with lots and lots of chances to deliberately practice writing an academic essay, and with each practice, I ask them to assess what went well and what didn’t go so well and what they need to focus on improving on in their next practice. Here’s my method for doing so.

Even though my students will eventually publish their essays on their blogs, I have them type them up in Word or OpenOffice first. For one thing, Word will catch some of the more blatant typos and grammar errors that wouldn’t be caught if they were composing within the Blogger dashboard. And if I happen to be using peer review that semester (some semesters I do, some I don’t), I always have them do so from hard copies, which are much easier to print out and read from Word. Some of the newer versions of Word even have a blogging template that allows students to type their posts up in Word and then publish them to their blogs.

Another pro of having students initially type their blog posts in Word is that I can have them highlight and annotate their essays using Word’s commenting tool (I’ve tried Google Docs, but its commenting tool does not allow for the kind of detail that I need when providing my own annotations). I ask students to highlight any parts of the essay that they have questions or concerns about and to use the commenting tool to communicate those questions to me so that I can address them. Students rarely take me up on this offer, but some do, so I continue to encourage it. But the real purpose of the Word version of their post is to provide them the space to answer five questions that require them to assess the essay. The questions vary from semester to semester, depending on whether I’m using peer review or focusing more on writing process or revision, but they always have the same goal: to encourage the student to reflect on their writing using their own judgement and valuation, rather than waiting for me to pass judgement on the piece’s value. Here are the five questions I had students answer last semester:

  1. What do you think is working well in this blog post?
  2. What do you think is not working well in this blog post?
  3. What challenged you the most about this blog post and how did you overcome the challenge? If you didn’t overcome it, how will you deal with this challenge the next time?
  4. How successful were you in addressing the weakness that you and/or I identified in your last blog post?
  5. Do you have any questions for me?

The three questions that, to me, are the most essential are 1 (because I think it’s just as important that they be able to recognize strengths as weaknesses), 2, and 4.

After reading and annotating the student’s essay using Richard Haswell’s minimal marking method, I then focus my feedback on their answers to these questions. Sometimes, in the case of a student who is not adept at assessing their own writing, my feedback focuses on correcting their misconceptions about their writing. This past semester, for example, I had a student who was extremely resistant to self-assessment and refused to admit that there were weaknesses in her writing, so I spent my initial feedback efforts trying to convince her of the necessity of taking an honest look at her writing; eventually, my frustration with her resistance got the better of me and I dedicated all of my feedback to listing the weaknesses in her essay (needless to say, her response was less than positive; she complained to the Department Chair about how mean and uncaring I was because I was constantly criticizing her writing). For those students who are more open to self-assessment and are, consequently, much better at honestly evaluating their writing, my feedback efforts focus on providing tips and links to resources that will help them address their weaknesses. When one student expressed dissatisfaction with her rough drafts, I suggested that she read Anne Lamott’s “Shitty First Drafts.” On the next self-assessment, the student thanked me for the suggestion and said that the essay had helped her tremendously. She then recommended it to another student in a comment on a blog post in which that student expressed frustration with the invention stage of the writing process.

Not all students act on my recommendations and even fewer pass them along to their peers, but at least their self-assessments provide a dialogue that is not encouraged in traditional, instructor-centered summative assessment models. And this dialogue continues throughout the semester, as students use their previous self-assessments and my feedback on them to answer the next one. This dialogue culminates in the writing portfolio that students submit at the end of the term. In putting together their portfolios, students have a semester’s worth of assessments that provide a narrative map of their progress as writers. They can use these narratives to select representative pieces of writing and write their final self-assessment. But it’s only final in terms of that particular class: for the portfolio, I ask them to identify aspects of their writing that they still see as weaknesses and to discuss how they plan to continue to deliberately practice at eliminating those weaknesses from their writing.

photo credit: giulia.forsythe via photo pin cc

Because they have been conditioned by their K-12 education to see the teacher as the sole authority in evaluating and valuing their learning, some students need guidance in assessing their own writing, and a small minority will be resistant to doing so. But for those students who are willing to learn how to do so, self-assessment can mean much more productive practice and, based on my observations, results in more meaningful learning than that experienced by students who depend solely on their instructor’s summative evaluations. This past semester, I asked my two FYC classes to anonymously respond to a midterm course evaluation. One of the questions asked what aspect of the course had helped them improve their writing the most, and the majority of students indicated that the self-assessments had been one of the most helpful aspects of the course (second only to blogging). Here are a few examples of students’ feedback on the self-assessments:

  • Having to specifically address issues in our writing through our [self-assessments] has helped me out immensely.
  • The instructor commenting on my writing and telling me how I can improve. FEEDBACK from the instructor helps a lot.
  • I really like how helpful you have been. I really like the [self-assessments] we get back each week.
  • I love the [self-assessments]. They help me. ALOT.

As an instructor who often struggles with doubts about the impact I am having on my students’ writing, that ALOT, though misspelled, really means A LOT.

By the way, if you’re wondering why I needed to change the wording in the feedback above: I don’t call the question sets self-assessments. I call them process memos or revision memos (again, depending on the focus of the course and the questions I’m asking them to answer). So, my students may not even realize that they’re engaging in the grading process (I don’t like the term grade, but that’s their only frame of reference and that’s what I’m ultimately required to do to their writing). And I’m not sure that it would be a good idea to muddy the water by telling them this. I don’t want them to start saying things like “I’d rate this essay at a B,” as if their writing were an egg that met certain interior and exterior qualities at the time it was packaged.

My student self-assessment system is, as is everything I do as an instructor, a work in progress. There may be better questions that I can ask. And I’m not sure I’m very good at teaching students how to evaluate their own writing. I’m interested in how others ask their students to assess their own learning and how they guide them in doing so. Please share your tips and experiences. How can we encourage students to assess themselves and be less dependent upon us as arbiters of their learning?