All Together Now: Some Further Uses for Google Docs in the Composition Classroom

photo credit: KatieTT via photo pin cc

It took me a long time to become a Google Docs convert. I played around with the app as a tool for collaboration in an upper-level course one term, and it was a total disaster, mainly because students didn’t know how to use it (and neither did I, really) and because we often ran into issues when students attempted to access documents that I had shared with them (I think this had much to do with Google Docs’ bugginess at the time). I subsequently used Docs only when I needed to access a document that had been shared publicly, and in doing so, I began to see the utility of creating certain documents in the app so that I could hyperlink to them and even embed them on a class website or whatever social media tool the class happened to be using.

The collaborative magic of Google Docs did not really appeal to me until I was forced to use the app to collaboratively edit an article that I had submitted to Hybrid Pedagogy. After submitting the draft of the article, the editors, Jesse Stommel and Pete Rorabaugh, provided me with feedback via the commenting feature, and then Pete and I used the in-document chat feature to discuss how best to integrate their ideas with mine. As I worked to revise the document, Pete (virtually) worked alongside me, serving as both sounding-board and devil’s advocate and providing me with synchronous feedback on my revisions. It was an eye-opening experience, not just because I was unaware of many of the tools available in Google Docs (such as the revision history feature and the chat tool), but because of how powerfully the act of collaboratively revising a piece of writing affected me. I had always written alone, in isolation, never with someone looking over my shoulder and certainly never engaging in a dialogue about my rhetorical choices (and possible alternatives) as I was making them.

If writing collaboratively had such an impact on my writing, I began to wonder what kind of impact it could have on my students’ writing. So I began to consider how I could use this powerful tool that I had been pooh-poohing for years as a weapon against the isolation, anxiety, and despair that I so often see plaguing my First-Year Composition students.

I know that there’s been a lot written about the value and utility of Google Docs in the classroom, so I won’t bore you with a rehashing of what others have already so effectively said. ProfHacker has written quite a bit about the app, and their post “GoogleDocs and Collaboration in the Classroom” is chock-full of links to various tips and useful ideas. Getting Smart’s “6 Powerful Google Docs Features to Support the Collaborative Writing Process” provides an excellent step-by-step guide to using Google Docs specifically for collaborative writing. And for a basic overview of Google Docs’ features and potential uses, you can browse through this slideshow:


By no means have I explored the full potential of Google Docs. But I would like to share a few strategies that I’m trying out in my Basic English Skills class this term that seem to be having an especially powerful impact on my students’ writing.

Daily Journals

I’ve always used journals in my literature and writing classes, whether they were reading journals, learning journals, or writers’ journals, because I believe that the most powerful thing we can teach our students is how to be more “meta.” But there are several problems with student journals. The main problem is access: I honestly never enjoyed lugging around armfuls of composition books, 3-ring binders, and plastic folders (or whatever else students had handy to stuff their hastily-thrown-together-at-the-last-minute “daily” journal into). Which brings me to the other problem. Since it was logistically impossible to check journals every day, I would usually take them up three or four times a semester, which meant that students could very well wait until the last minute to write all of their journal entries (ingeniously writing each entry in a different color of ink to disguise the subterfuge). This also meant that students were without their journals for the few days it took me to read and record their entries.

These problems are why I became an early adopter of student blogging. By having students blog instead of keeping analog journals, I could monitor their entries (and when they were doing them) without inconvenience to the students or myself. But students are sometimes hesitant about, or resistant to, making such informal, and often intimately personal, writing public. So, this term I have asked my Basic English Skills students to keep a daily journal in Google Docs (they can write about anything they wish; the journal functions to help them build their writing muscles), which they’ve shared only with me. Besides alleviating any anxiety students might have felt about making their journals public, Google Docs allows me to easily monitor new entries (whenever a Doc is edited, its title turns bold) and to verify when students are completing their entries (by using the revision history feature). Aside from how much easier it now is to ask students to keep journals, I’m also enjoying reading their journals and learning more about their lives outside of the classroom (many of which are filled with challenges and struggles that often leave me in tears and/or feeling extremely blessed).

Writing in Teams

The sources that I referenced above have already pointed out the benefits of using Google Docs during the brainstorming and peer review processes. But I wanted to channel some of the power of those collaborative writing sessions that I shared with Pete Rorabaugh to help alleviate some of the angst that many students in a remedial writing class experience as they work their way through the entire writing process. So, I decided to have the students write in teams of three, with one team member serving as lead editor each week. The lead editor is in charge of that week’s blog post: they come up with a focus question, locate 2-3 sources to help answer it, and share both with their team before the week’s first class meeting. (The teams indicate each week’s lead editor in a Google Docs spreadsheet so that I always know who is in charge.)

But it gets really interesting when the teams come together in the week’s first class meeting. The lead editor creates a Google Doc, shares it with their team and me, and types in their focus question and a brief summary of how they plan to answer it. What follows is a 30-40 minute session in which the team discusses the question, the lead editor’s sources, and the plan for answering it entirely in writing in the Google Doc, observing a strict rule of silence (I adapted this activity from Lawrence Weinstein’s “Silent Dialogue” activity in Writing Doesn’t Have to Be Lonely). The purpose of this activity is to force the team to flesh out the lead editor’s ideas and to communicate all of their ideas in written form. This benefits the lead editor because it provides them with sounding-boards and devil’s advocates; by the time they leave class, they have a much better grasp of what it is they want to say and how best to say it. It also benefits the other team members because it gives them more practice in expressing their ideas in writing. And it allows me to monitor the team’s work and provide my own feedback early in the writing process, before the lead editor begins writing a draft that might be too ambitious in scope.

Aside from the pedagogical functions of the collaborative brainstorming session, the human factor becomes more obvious and explicit (a factor that, unfortunately, we as teachers often forget about). The docs lay bare the students’ hesitancies, their false starts, their doubts, their over-shootings, their assumptions, their candor, their egos, their camaraderie, and their humor. Here’s an example of one team’s silent dialogue session:

The next step in the process is for the lead editor to come to the next class meeting with a rough draft that they share with their team and me. The team then begins the process of revising, proofreading and editing, and designing the blog post. Again, I can use the revision history feature to monitor the transformation of the draft, verify that all team members are contributing, and provide feedback on the effectiveness of their work. All in all, this aspect of the collaborative writing model has been successful because of the synchronous access that Google Docs gives me to the students’ writing process, and I’m not sure that it would be as successful without it.

What I think I see as I read through the teams’ weekly brainstorming and collaborative writing sessions is a sense that they are not alone, that they have peers who are capable of helping them and who are invested in their writing as much as they are in their own.

What a powerful thing for students to feel.

And while I can’t say with 100% certainty that the students’ writing is better than it would have been without Google Docs, I’m confident enough that it is that I’ll be putting it to the test in my regular FYC classes next term.


Loitering in the Witch’s House: My MOOC Experience

photo credit: perpetualplum via photo pin cc

Whether you love Google or hate it, there’s no denying the fact that the company is at the leading edge of open source apps and educational resources. And whether we like it or not, the majority of students are using Google as their primary research tool (and, according to a study summarized by Sarah Kessler, they’re not using it very effectively). I use Google apps extensively in my hybrid courses and, recognizing a need on my students’ part to learn how to use the internet more effectively and critically, I’ve begun to integrate the Google search engine into my research workshops. So when Google recently offered a MOOC entitled “Power Searching with Google,” I immediately signed up, hoping in the process to kill two birds with one stone: 1) to learn some Google search strategies that I could pass along to my students, and 2) to get a taste for the MOOC experience. It was a mixed bag.

Set-up
In terms of set-up, the course was very straightforward. Lessons consisted of video demonstrations followed by activities designed to test your ability to apply the skills addressed in each video. Assessment consisted of a pre-course assessment (meant to gauge existing knowledge of Google search features), a mid-course assessment, and a final assessment. The scores for the mid-course and final assessments were averaged together to determine your “grade” for the course and a passing grade resulted in a certificate of completion. There was also a discussion forum that you could voluntarily participate in.

Pros
1) Individualized pace: While there were deadlines for the mid-course and final assessments, you could work through the course materials at your own pace as long as you were ready to meet those deadlines. This worked great for me because I could complete individual lessons or entire units as it suited me. Considering the hectic schedule I have this summer, this was by far the most effective aspect of the course for me.

2) Paced release of materials: While I could work at my own pace on the materials available to me, I was limited by the fact that the units were released on a staggered schedule. This actually turned out to be a positive for me: since I couldn’t see the entirety of the course materials at the beginning, I wasn’t overwhelmed by the amount of material I would need to cover, and I remained focused on each set of materials I had access to.

3) Do-overs: Both practice activities and assessments were set up to allow multiple attempts at answering questions correctly. You could check your answers before submitting your assessments, and wrong answers to practice activities usually triggered some feedback about what to review in order to better understand the skill addressed in the activity. I found this to be a very effective method for learning because I didn’t have the fear of failure hanging over me that a single-attempt set-up would have created.

4) Leveling up or down: While I didn’t actually make use of it, there was the option to change the difficulty level of practice activities to either an easier activity or a harder activity. Again, I see this as being an effective method for individualizing assessment. There was also an option to skip activities and see the correct answers. This was effective for those search functions that I was already familiar with and didn’t necessarily want to waste my time trying out; being able to see the answers allowed me to self-assess my prior knowledge and move forward quickly if I wanted to.

Cons
1) Boring videos: I don’t expect lectures and demonstrations to be entertaining, but I do expect them to be somewhat engaging on an intellectual level. The videos were not long (the longest was a little over eight minutes), and this brevity was their only saving grace. It wasn’t just the fact that the instructor sat on a couch the whole time (I suppose in an effort to make the instruction feel more personal), but the content itself dragged in several lessons. Some lessons were far too simplistic and some were overly repetitive. A boring presenter is boring, whether IRL or on video.

2) Google Chrome required: All demonstrations were done in Chrome, so I could not replicate some of the tasks, such as the Search by Image function, as demonstrated. There was no discussion by the instructor of the different ways to complete these tasks in other browsers, though I did eventually receive help via the forum (after I had completed the final assessment). This often led to frustration on my part. If I had taken this course IRL, I would have been able to ask for clarification from the instructor.

3) Difficult tasks given short shrift: There were a few lessons that contained difficult concepts, such as using and interpreting results on WHOIS databases. There was little time spent discussing and demonstrating how to use these databases (although the instructor acknowledged the difficulties of using them), yet being able to do so was part of the final assessment. As a student, this was extremely frustrating and I quickly gave up trying to figure it out by myself (my frustration is demonstrated with some rather derogatory doodles next to my notes on this lesson and a final assessment of the lesson as “useless”). Again, IRL instruction would have afforded me the opportunity to seek clarification on these muddy points and perhaps encourage the instructor to extend the time spent on the databases.

4) Plug-and-chug assessment: While the practice activities required direct application of skills, the assessments were multiple-choice and fill-in-the-blank problems that, for the most part, simply required regurgitating information from the instructor’s demonstrations. At this point, I’m not certain how much of the course I have really learned and internalized and how much I’ve simply managed to hold in my short-term memory.

5) Forum confusingly organized and asynchronous: The few times that I did try to use the forum, I had difficulty navigating it. It was supposedly organized by lesson, but I could never find a direct link to the discussion threads for a specific lesson, and it seems that most people just posted wherever they felt like it. When I posed questions, I did not receive immediate (or even timely) feedback; the earliest I received an answer was a little over 24 hours after posting the question. Of course, one aspect of open online learning that MOOCs bank on is student participation; they count on the fact that other students are probably online when questions and comments are posted and are likely to respond faster than forum moderators. However, in this particular MOOC students did not seem particularly eager to help each other out or respond to each other’s posts, and all of my questions were answered by forum moderators.

What does this mean for MOOCs?
My initial response to the idea of MOOCs was hesitantly hopeful. Having completed one, I’m pretty much left with the same reservations about them that I have about tuition-based online courses. They are inherently more suited to certain types of students, i.e., highly motivated, self-aware learners with good time management skills and a high tolerance for working alone without immediate access to, or feedback from, their instructor and classmates.

In terms of instruction, it takes as much effort, if not more, to make an online course engaging, because it’s far easier for students to become disengaged from one, especially one that’s free and offers no extrinsic motivation to stay connected and finish. The one thing that’s possible in online course design that MOOCs cannot capitalize on, due to their massive size, is individualized instruction. I’m not completely sure of the purpose of the pre-course assessment for Google’s MOOC (unless it’s simply for their own data collection purposes), because the rest of the course was not structured based on my answers to the initial assessment questions. IRL and in small online courses, diagnostic assessments allow for individualization because you can use the information garnered to direct students towards the materials that will be of most use to them, given the gaps in their prior knowledge.

My first MOOC was like the gingerbread house in Hansel and Gretel. It seemed to offer an educational paradise: no-cost, developed and delivered by domain experts (whose “certificate of completion” holds cachet), flexible in terms of when and how I completed it, open in terms of whom I would be sharing the experience with. Unfortunately, the reality did not live up to the fantasy. Of course, unlike Hansel and Gretel, I could have left whenever I wished. Instead, I stuck it out to the bitter end, hoping to find some redeeming quality in something that held such promise.

What does this mean for hybrid and fully f2f courses?
We need to continue to figure out how to capitalize on the best aspects of f2f learning and online learning. Some variables remain the same, no matter what the medium of instruction. Boring is boring. Materials and activities need to be intellectually engaging and individualized to the greatest extent possible. Community is essential; students need access to their teacher and their classmates, whether physically or virtually, and some of that contact needs to be synchronous (which is one reason that I think hybrid courses are so effective). Assessment needs to be formative, immediate, and authentic. And no type of assessment can measure engagement. I earned a pretty high score in the Google MOOC, a score that does not reflect the boredom and frustration that I experienced. While I certainly came away from the course with an extended set of Google search skills that I did not possess prior to the course, I’m not sure that I would have completed it had I been less motivated (the certificate of completion will help to pad my annual faculty review packet).

How many of our own students have walked away from our courses with A’s or B’s, despite boredom or frustration? If we base the success of our courses on the grades that students come away with, we’re ignoring an aspect of learning that MOOCs make obvious: the hardest-working and most motivated students will succeed, no matter how poorly designed the learning experience. So, it’s important for students to have opportunities to share anecdotal feedback, not just at the end of the course, but from the very beginning and throughout. And it’s important that we be willing to act on that feedback.

In hindsight, I now recognize that it will be very difficult for designers of MOOCs to do this. In fact, it is difficult for MOOCs to enact most of the learning practices that I value: learning-centered instructional design; a skatepark-like learning environment; immediacy; flexibility; authenticity; hybridity; intimacy with the materials, ideas, and people who make up the body of the course. Instead of heralding MOOCs as the salvation of education, we need to recognize them for what they are: an alternative that works for some learners on some levels. However, it’s also an alternative that is still in its infancy and still has room to grow; in fact, I think that DS106 demonstrates what MOOCs are capable of with the right kind of instructors and objectives. Whether or not they can, as a general rule, get there is up for grabs. What makes DS106 work is that it is, like the best IRL courses, a truly student-centered community, in that students develop and help assess the assignments. It’s a course completely devoid of sticks and carrots and completely built on the desire to be a part of a unique learning community.

This ideal of a free and open learning community built upon choice and intrinsic motivation is the real promise of MOOCs. But if we continue, as some institutions and companies do, to look to MOOCs as a vehicle for the mass-production and broad dissemination of canned content, we’ll never get there.

Hacking Assessment: Redesigning the Numbers Game

photo credit: davidfg via photo pin cc

In a recent post, I outlined some ideas that I have about integrating principles of game design into the FYC course. As I pointed out, I’m not all-out gung-ho about the idea of the gamification of education. It turns out that many of my reservations about this latest trend in reforming education are shared by game designers themselves. In her post “Everything Is Game Design,” game designer Elizabeth Sampat makes clear that it is overreaching to assume that any group of practitioners can co-opt the extremely complex and abstract principles at play in a successfully engaging (to some) game and apply them to any other domain:

“Gamification” assumes all games share the same mechanics, which means everything that’s gamified is basically the same shitty game. Using badges and leaderboards and offering toothless points for clearly-commercial activities isn’t a magic formula that will engage anyone at any time. Demographics are different, behavior is different . . .

These are the same issues with gamifying the classroom that keep me from wholly embracing the concept. For one, the whole point of a game is that it is . . . well, a game. Games are voluntary. As soon as you force someone to join in a game, it stops being a game for them. It becomes a compulsory activity devoid of intrinsic value, and all of the extrinsic rewards you can throw at players, while perhaps artificially increasing their motivation, cannot turn it back into a game (except, perhaps, in the negative sense). Even when we gamify a class, we’re still making the learning that takes place within that game compulsory, effectively negating any positive characteristics of gaming that we are attempting to channel. And, as Sampat points out, the characteristics that make any game engaging cannot be standardized. What works for one gamer doesn’t work for another. So, in many ways, game designers face the same kinds of issues and challenges that educators face.

Another point that I think has been largely overlooked in this debate is that, for the large majority of students (if not all), school is already a game. We have goals (behavioral or learning objectives), challenges (in-class activities, homework, exams, and standardized tests), and rewards (grades). We’ve got levels (grade levels based on age in K12 and hours-earned status in college) and leaderboards (A/B honor roll in K12 and President’s and Dean’s lists in college). And we have clearly defined roles (teacher as locus of power and expertise, student as powerless and largely silent novitiate). Some students figure out pretty early how to play the game. In college, these are the students whose identity is inextricably intertwined with their grades. “But I’m an ‘A student,’” they insist when faced with anything other than an A. Other students learn early on how to game the game. These are the students who know how to manipulate the system and those in charge of it and can often be just as successful at winning the game as their overachieving counterparts. But some students never learn how to play the game according to our rules. Others don’t want to play it because they see it for what it is.

Whether we realize it or not, we’re already playing games with our students. And it’s a numbers game. Play the game according to our rules and we’ll reward you with a high GPA and a diploma, with the promise that these are the badges you need in order to level up to the American Dream. This kind of game is both irrelevant and counterproductive in a culture that is becoming increasingly participatory, rather than competitive, in nature (just read Share or Die: Voices of the Get Lost Generation in the Age of Crisis to get an idea of how important cooperation and collaboration are becoming for those graduating into the current economy). Many educators are fighting to reform the standardized, hierarchical forms of assessment that have been in place since the industrialization of education; but until they succeed in effecting a wholesale paradigm shift, rather than applying a false facade and calling it reform, we are forced (much like our students) to figure out ways to hack the game. As Sampat argues:

Finding the reward structures and the rules that are already in place, and figuring out how to make them more effective, is the key to making life better for everyone— not adding an additional layer of uninspiring mechanics that push us to engage with mechanics that already suck.

Just as games are not one-size-fits-all, assessment shouldn’t be one-size-fits-all, either in terms of standardized criteria applied to all students or evaluative formats used for all courses and disciplines. Just as each course has its own unique set of learning objectives, each course should have its own method for assessing how students go about achieving those objectives. I think it important to explore various assessment methods in an effort to find which is most effective for a particular course. For example, I have found that a portfolio method is exceptionally well-suited to my composition courses, as it allows for the abstract nature of the writing process and the subjectivity that characterizes the act of evaluating and valuing a piece of writing. But in trying to incorporate a portfolio system into my speech courses (both an introductory oral communication class and an advanced argumentation and debate class), I have had less success, though for different reasons (perhaps due to the differences among the students: freshmen and upper-level secondary-education majors, respectively). As much as the portfolio method places value on each student’s individual learning needs, goals, and achievements, within the current grades-based system students in certain courses need to be able to visualize their learning at both a qualitative and a quantitative level. So, what are the alternatives?

Peer Assessment
One option that is gaining ground is peer assessment. Cathy Davidson has successfully explored this method in her “This Is Your Brain on the Internet” class (read “How to Crowdsource Grading” for her description of the process and the thought-provoking debate that followed and “How to Crowdsource Grading: A Report Card” for an overview of her students’ responses to the method). Many MOOCs utilize peer assessment out of necessity. According to Debbie Morrison, within the MOOC environment, peer assessment results in an enhanced learning experience for the student, as grading their peers’ work requires a deeper engagement with course content.

I’ve utilized peer assessment in both of my speech classes to varying degrees and with varying levels of success. In my introductory speech class, the students work together at the beginning of the term to develop a checklist for an effective speech (I don’t use rubrics because, in my experience, they become just another hierarchical form of grading that allows students to retain many of the gaming habits they adopted in K12). They do this by watching several speeches on YouTube and creating individual lists of do’s and don’ts, which we then collate into a master list. For each speech, students are evaluated by five randomly selected anonymous peers, who use the checklist to assess the speech. The students are also filmed, and they must use both the video and their peers’ checklists to compose an assessment of their speech, which they post to an e-portfolio along with all artifacts associated with the speech (outlines, bibliographies, slideshows, photos of visual aids, the video of the speech, etc.). For this particular class, I have found that a combination of self- and peer assessment has been much more effective than solely self-based assessment (which tended to be superficial) or even instructor-based assessment (in which students received only one assessment, as opposed to five, and tended to focus more on improving their “grade” than on becoming more effective speakers). With the peer assessment method, students’ speeches are evaluated by their audience, and their focus shifts towards improving their audience’s response to subsequent speeches.

I have tried this kind of peer assessment in my debate class with far less success. For one, the class is much smaller, and consists, for the most part, of a cohort of sophomore and junior-level secondary education majors. These students tend to be very cliquish and ironically conservative in terms of the practices they expect in the class; they tend to be “A-gamers” obsessed with acing the course and uncomfortable with the level of abstractness and improvisation involved in debate. As a result, they tend to assess their peers over-generously and resist critiquing one another (one class even admitted to giving each other positive assessments across the board because they didn’t want to “hurt someone’s grade”). They look to me as the expert, so their portfolio reflections tend to be focused on flattering me and the course and highlighting aspects of their performances from my point of view (“If I were the instructor, I would give this speech a [insert grade here]”). Despite my best efforts, these students are resistant to assessment formats that are not instructor-based. So what’s a disruptive pedagogue to do?

Contract Grading
I was at first dismissive of contract grading, based on the distaste I harbor for the artificially hierarchical nature of any type of grades-based assessment (and the name’s implication of a kind of capitalistic supply-and-demand relationship between student and teacher). But I have warmed to the method for its ability to bridge the gap between my students’ need for a quantitative value to be placed on their learning and my own objective of encouraging them to recognize, and become complicit in, the qualitative value of that learning.

For one, I’m hoping that it will eliminate the specter of grades that haunts the course by directly addressing the students’ anxiety regarding their status in a course that has no exams or other easily quantifiable activities. Students will decide what grade they wish to work towards and will have a specific, objective set of criteria that they must achieve in order to earn that grade (yes, I know this sounds just like a syllabus with a traditional grading schema, but contract grading makes the implicit aspects of the traditional schema explicit and, in many ways, mimics the game design principle of starting at zero and gaining points as you go). Once the question of grades is out of the way, perhaps the students will be more willing to focus on learning and improving.

Secondly, contract grading requires student input regarding the challenges that must be met in order to level up (yes, I know I’m wading back into gaming territory, but, as I’ve argued, our goal should be figuring out what works for a particular course and cohort of students rather than a wholesale dismissal or acceptance of any one method or theory). Often, in order to earn an A or a B, students must complete additional learning tasks, sometimes choosing between several options, which they can be invited to develop. This aspect of contract grading is the one that I find most promising in terms of encouraging student investment in the learning environment. While I have long preached to students that, in the words of Lennon and McCartney, “in the end, the love you take is equal to the love you make,” contract grading makes student-centered initiative an explicitly integral component of the course.

Thirdly, contract grading will allow me to address both the students’ insistence that I fulfill the role of expert assessor and my own wish for them to fulfill the role of deliberate and reflective practitioner. Different grades require different levels of mastery, so students who contract for a certain grade must revise and/or re-attempt assignments that don’t demonstrate mastery. While my debate students can’t re-do a live debate, they can complete a video re-enactment that improves upon their live performance or record a play-by-play self-critique using VoiceThread or screencasting software. In addition, some of the optional assignments can require peer or self-assessment or other types of reflective learning practices.

While I’m not completely comfortable with contract grading (just as I am not completely comfortable with gamification), I also recognize that other assessment methods are not working for my upperclassmen and, as a result, are interfering with my efforts to push them beyond a superficial engagement with their learning in the course. I believe firmly that we must recognize our students’ needs, values, and histories; but we can’t pick and choose which of those we take into consideration when designing their learning environments. Sampat makes a point that I think is important for us to keep in mind in the process:

The core principle to remember is that game design is everywhere. Instead of trying to stick a crappy, half-formed game onto real life, the real challenge— the one that’s tough, the one that will bring the greatest results— is to fix the bad game design that’s all around us.

Students won’t be open to assessment that values quality over quantity or process over product until we recognize that our current assessment paradigm is a badly designed game that needs to be torn down and redesigned. Sampat suggests two questions to ask when considering whether or not something is badly designed:

  • What’s supposed to be the goal here?
  • Is this experience set up to help or hinder my ability to reach that goal?
I’m game.

Resources on Contract Grading

These are the sources that I consulted to help me better understand the possibilities afforded by contract grading:


Building a Better Blogging Assignment Redux

photo credit: Mike Licht, NotionsCapital.com via photo pin cc

One of the sessions at last week’s THATCamp dealt with the issue of designing a better model of student blogging. You can view my Storify of the session here.

I thought that I would add some of my own ideas and discuss how I address some of the issues raised during the session (since, unfortunately, I couldn’t be there).

As noted on the session’s Google Doc, a major problem with requiring students to blog is that the large majority of them are unfamiliar with blogs, so we need to identify effective methods for acculturating them to the genre. Since I’m an advocate of immersive learning, I’ve found that many students begin to “get” blogging by spending a good deal of time actually doing it. But I’ve developed a few orientation assignments that help them get off to a good start.

  • Require students to locate, deconstruct, assess, and subscribe to blogs on topics that interest them: As homework during the first week of class, I have students locate several blogs on a topic that they’re interested in. They pick the best three and subscribe to them. While exploring blogs on their topic, they create a list of criteria for an effective blog. We use a class meeting to collate their criteria into a master list that they can then use as a checklist for their own blogs. Next term I’m planning to expand this assignment by having students work together to deconstruct a blog.
  • Teach them how to comment: This is something that I still struggle with. I provide students with several resources on commenting, including those mentioned at the session; nonetheless, many of them provide largely superficial comments. Next term I plan to have students read and assess comments on the blogs they’ve subscribed to and add their own comments. As with the assignment above, students will work together to establish criteria for effective commenting.

A second, and equally important, issue is the logistics of blog management, both for yourself and for the students: controlling pacing (so that you don’t have to deal with an influx of posts and comments at the last minute), encouraging engagement with the blogs (both their own and their peers’), and assessing the blogs.

  • Establish submission guidelines (and stick to them): I establish strict deadlines for post submissions and stick to them from the very first post. For fully face-to-face courses, I generally make the deadline the night before class. For my hybrid courses, the deadline is on the day that we do not meet. Either way, I set the deadline for a time well before I and other students need to access the blogs.
  • Encourage engagement with peers’ blogs: I require that students subscribe to each other’s blogs and read and comment on a certain number of them each week. I’ve tried to encourage more depth in their comments by staggering the due dates for posts and comments (generally they have 12-24 hours after the blog post deadline to read and respond to peers’ posts). I’ve had even better success this past term by combining this with rotating students between the roles of poster and reader/commenter, which allows them to fully focus on and engage in one role at a time. This method requires reducing the number and frequency of posts for each student, but I think that the pay-off will be worth it, especially given that it places as much emphasis on their comments on others’ blogs as on their own blog posts (which means that I’ll have to invest more time into assessing their comments somehow).
  • Make the blogs an integral component of the course: I try to immerse students in their blogs as much as possible because I’ve found that the more they blog, the better bloggers they become. I now require that all of their writing be done on their blogs and I ask them to blog and comment on blogs as frequently as possible (at least once a week). I think that it’s a major mistake to have students blog but then not integrate the blogs into the classroom interactions in some way; this encourages students to view the blogs as secondary to the other class work. In my literature courses, the students’ blogs become the fulcrum for the class discussions. I encourage students to pick the most thought-provoking posts for us to look at together in class. In my FYC courses, I pick one model post each week for us to critique as a class, asking students to assess the post in small groups, looking for reasons why I selected it as a good model. Since the class uses Google+ as a virtual learning space, I also “plus 1” those posts that are especially thought-provoking, well written, and/or visually appealing (I encourage students to do this, as well); this provides students with almost instantaneous feedback and encourages those who might not have read and/or commented on the posts to do so. This also results in a type of gamification of the blogs, as some students begin to work to earn “plus 1’s” from me and their peers. Next term, I plan to also encourage students to use other social media to promote and “like” their peers’ posts.
  • Involve students in the assessment of their blogs: In a previous post, I outlined how I require students to self-assess their writing. I have been happy with the way I’ve asked students to create a portfolio of their blog posts to submit to me at the end of term, rather than assigning a grade to each individual blog post (I’ve tried to eliminate traditional grades as much as possible in my classes). Normally, I have students do this via a final assessment form that they fill in and submit to me via email, hyperlinking to specific posts that they want to include in their assessment, and discussing in detail why they selected them and how they demonstrate what they’ve learned about writing. But I’m considering remixing Mark Sample’s idea of a blog audit; I think that making their reflections public on their blogs will encourage an even deeper consideration of who they are as writers and what they’ve done as bloggers over the course of the term, mirroring the way that many bloggers use their blogs as reflective spaces. I also like his idea of having students revisit and revise some of their old posts, which is something I used to encourage students to do with their writing before I switched to blogs, and would like to re-incorporate into their portfolio creation.
  • Utilize formative and peer assessment: This is still something that I’m tweaking. So far, I’ve found my method for providing formative assessment effective (and students have indicated the same). What I haven’t been able to integrate as effectively is peer assessment. I would love to use a badge system, like Mozilla’s Open Badges, but I haven’t had the time to figure out the best way to do so, or even whether it’s possible for me, since I don’t know how to code and don’t yet know whether coding is necessary to use the program (two issues I’m hoping to remedy soon; see the sketch just below this list). In the meantime, I’ll encourage the use of readily available social media feedback systems such as Facebook’s “like” and Google’s “plus 1” buttons.
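
A quick aside on that Open Badges question: from what I’ve been able to gather so far, issuing a badge seems to boil down to publishing a small JSON “assertion” file for each badge a student earns. Here’s a rough, purely hypothetical Python sketch of what generating one might look like; the badge name, URLs, email address, and salt are all placeholders I made up, and the field names reflect the still-evolving spec only as I currently understand it:

    # Hypothetical sketch of an Open Badges "hosted" assertion. Field names are
    # based on my reading of the draft spec; the URLs and email are made up.
    import hashlib
    import json

    email = "student@example.edu"
    salt = "peerfeedback"  # made-up salt; the spec hashes emails for privacy

    assertion = {
        "uid": "effective-commenter-001",
        "recipient": {
            "type": "email",
            "hashed": True,
            "salt": salt,
            # hash the email so the assertion can be public without exposing it
            "identity": "sha256$" + hashlib.sha256((email + salt).encode()).hexdigest(),
        },
        # URL of a BadgeClass file describing the badge (name, criteria, image)
        "badge": "https://example.edu/badges/effective-commenter.json",
        "verify": {
            "type": "hosted",
            # URL where this assertion file itself would be hosted
            "url": "https://example.edu/badges/assertions/001.json",
        },
        "issuedOn": 1357016400,  # Unix timestamp for the date issued
    }

    print(json.dumps(assertion, indent=2))

If that’s really all it takes, the coding hurdle may be lower than I feared; the bigger questions would be where to host these files and how students would collect the badges they earn.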

A third issue that came up repeatedly during the session is how to allow for disruption and alternatives within the blogging domain.

  • Allow/encourage alternative uses for blogs: Since I require that students publish all of their writing for the class to their blogs, their posts sometimes contain nontraditional material (although I always try to help students understand that, with the advent of photoblogs, vlogs, and podcasting, there is no longer such a thing as traditional blog content). For example, this term I’m requiring my FYC students to use Storify to create their annotated bibliographies and then embed their stories into their blogs for comment by me and their peers. Last term, my students participated in DS106, which meant that their blogs became populated with memes, mashups, animated gifs, and sound clouds.
  • Disrupt the digital environment: Interestingly enough, as participants were discussing Mills Kelly’s ideas about disruptive pedagogies and then considering ways to disrupt student blogging, I was blogging about Paul Fyfe’s theory of teaching naked and considering how to disrupt the digital environments within which I ask my students to work. One idea that I blogged about, and that serendipitously showed up on the blogging session Google Doc, is that of requiring students to engage with and use their blog posts in non-digital ways. I think that this is an aspect of student blogging that needs more attention, and I hope that a conversation can develop around it.

These are just a few of the blogging methods that I have found effective and, as indicated, I’m still working at improving some of them. I encourage those who require their students to blog or who are thinking of doing so to help continue the conversation here, on my Storify of the THATCamp session, on Mark Sample’s THATCamp blog post, or on Twitter (use the #thatcamp hashtag).

The Role of Self-Assessment in Deliberate Practice

photo credit: dkuropatwa via photo pin cc

In my last post, I discussed the need for students to engage in deliberate practice. I think that this is especially true in First-Year Composition courses. For one thing, I’m not sure that we can really teach students how to write. I think we can give them some best practices to follow and show them models of good writing, but writing is one of those skills that you can only learn by doing. And writing, especially academic writing, is a complex skill that takes years to develop. And I only have 14 weeks (or, in the case of my current summer short-term class, eight weeks).

The other problem that we face in the FYC classroom is the fact that our students come to us with such varied abilities and backgrounds in writing instruction. Some have had little instruction in writing, or instruction so poor that they struggle to write coherent sentences and put them together in a logically organized paragraph. Some have had intense instruction in a very structured form of writing (the old five-paragraph, 5-7-sentences-per-paragraph, keyhole style of essay) that works well on standardized exams but does not allow for the varied disciplinary styles that they will be asked to tackle in college. Some have had their heads filled with a lot of bullshit do’s and don’ts: don’t start a sentence with and or but; don’t use the first-person pronoun; always start your essay with a catchy hook, preferably something that sounds cosmically philosophical; always place your thesis at the very end of your introductory paragraph.

I always have my FYC students read a chapter from Surviving Freshman Composition by Scott Edelstein called “The Truth about Freshman Composition” because it does an excellent job of explaining the differences between the kind of writing instruction they received in high school and the kind that they will (hopefully) receive in college, and it also dispels a lot of the writing myths that they almost certainly have been taught. Students are always surprised, and sometimes even angry, that so much of what they were taught in high school has not prepared them for writing in college and, in some cases, was just plain wrong.

So I spend quite a bit of time forcing students to unlearn bad writing habits and learn new ones, though the new ones I ask them to learn deal less with how to write and more with how to think about what they’re writing and how to assess how well it accomplishes their purposes. I provide them with lots and lots of chances to deliberately practice writing an academic essay, and with each practice, I ask them to assess what went well, what didn’t go so well, and what they need to focus on improving in their next practice. Here’s my method for doing so.

Even though my students will eventually publish their essays on their blogs, I have them type them up in Word or OpenOffice first. For one thing, Word will catch some of the more blatant typos and grammar errors that wouldn’t be caught if they were composing within the Blogger dashboard. And if I happen to be using peer review that semester (some semesters I do, some I don’t), I always have them work from hard copies, which are much easier to print out from Word. Some of the newer versions of Word even have a blogging template that allows students to easily type their posts up in Word and then publish them to their blogs.

Another pro of having students initially type their blog posts in Word is that I can have them highlight and annotate their essays using Word’s commenting tool (I’ve tried Google Docs, but its commenting tool does not allow for the kind of detail that I need when providing my own annotations). I ask students to highlight any parts of the essay that they have questions or concerns about and to use the commenting tool to communicate those questions and concerns to me so that I can address them. Students rarely take me up on this offer, but some do, so I continue to encourage it. But the real purpose of the Word version of their post is to provide them with the space to answer five questions that require them to assess the essay. The questions vary from semester to semester, depending on whether I’m using peer review or focusing more on writing process or revision, but they always have the same goal: to encourage the student to reflect on their writing using their own judgement and valuation, rather than waiting for me to pass judgement on the piece’s value. Here are the five questions I had students answer last semester:

  1. What do you think is working well in this blog post?
  2. What do you think is not working well in this blog post?
  3. What challenged you the most about this blog post and how did you overcome the challenge? If you didn’t overcome it, how will you deal with this challenge the next time?
  4. How successful were you in addressing the weakness that you and/or I identified in your last blog post?
  5. Do you have any questions for me?

The three questions that, to me, are the most essential are 1 (because I think it’s just as important that they be able to recognize strengths as weaknesses), 2, and 4.

After reading and annotating the student’s essay using Richard Haswell’s minimal marking method, I then focus my feedback on their answers to these questions. Sometimes, in the case of a student who is not adept at assessing their own writing, my feedback focuses on correcting their misconceptions about their writing. This past semester, for example, I had a student who was extremely resistant to self-assessment and refused to admit that there were weaknesses in her writing, so I spent my initial feedback efforts trying to convince her of the necessity of taking an honest look at her writing; eventually, my frustration with her resistance got the better of me and I dedicated all of my feedback to listing the weaknesses in her essay (needless to say, her response was less than positive; she complained to the Department Chair about how mean and uncaring I was because I was constantly criticizing her writing). For those students who are more open to self-assessment and are, consequently, much better at honestly evaluating their writing, my feedback focuses on providing tips and links to resources that will help them address their weaknesses. When one student expressed dissatisfaction with her rough drafts, I suggested that she read Anne Lamott’s “Shitty First Drafts.” On the next self-assessment, the student thanked me for the suggestion and said that the essay had helped her tremendously. She then recommended it to another student in a comment on a blog post in which that student expressed frustration with the invention stage of the writing process.

Not all students act on my recommendations, and even fewer pass them along to their peers, but at least their self-assessments provide a dialogue that is not encouraged in traditional, instructor-centered summative assessment models. And this dialogue continues throughout the semester, as students use their previous self-assessments and my feedback on them to answer the next one. This dialogue culminates in the writing portfolio that students submit at the end of the term. In putting together their portfolios, students have a semester’s worth of assessments that provide a narrative map of their progress as writers. They can use these narratives to select representative pieces of writing and write their final self-assessment. But it’s only final in terms of that particular class. For the portfolio, I ask them to identify aspects of their writing that they still see as weaknesses and to discuss how they plan to continue to deliberately practice at eliminating those weaknesses from their writing.

photo credit: giulia.forsythe via photo pin cc

Because they have been conditioned by their K12 education to see the teacher as the sole authority in evaluating and valuing their learning, some students need guidance in assessing their own writing, and a small minority will be resistant to doing so. But for those students who are willing to learn how, self-assessment can mean much more productive practice and, based on my observations, results in more meaningful learning than that experienced by students who depend solely on their instructor’s summative evaluations. This past semester, I asked my two FYC classes to anonymously respond to a midterm course evaluation. One of the questions asked them what aspect of the course had helped them to improve their writing the most, and the majority of students indicated that the self-assessments had been one of the most helpful aspects of the course (second only to blogging). Here are a few examples of students’ feedback on the self-assessments:

  • Having to specifically address issues in our writing through our [self-assessments] has helped me out immensely.
  • The instructor commenting on my writing and telling me how I can improve. FEEDBACK from the instructor helps a lot.
  • I really like how helpful you have been. I really like the [self-assessments] we get back each week.
  • I love the [self-assessments]. They help me. ALOT.

As an instructor who often struggles with doubts about the impact I am having on my students’ writing, that ALOT, though misspelled, really means A LOT.

By the way, if you’re wondering why I changed the wording in the feedback quoted above, it’s because I don’t call the question sets self-assessments. I call them process memos or revision memos (again, depending on the focus of the course and the questions I’m asking them to answer). So, my students may not even realize that they’re engaging in the grading process (I don’t like the term grade, but that’s their only frame of reference and that’s what I’m ultimately required to do to their writing). And I’m not sure that it would be a good idea to muddy the water by telling them this. I don’t want them to start saying things like “I’d rate this essay at a B,” as if their writing were an egg that met certain interior and exterior qualities at the time it was packaged.

My student self-assessment system is, as is everything I do as an instructor, a work in progress. There may be better questions that I can ask. And I’m not sure I’m very good at teaching students how to evaluate their own writing. I’m interested in how others ask their students to assess their own learning and how they guide them in doing so. Please share your tips and experiences. How can we encourage students to assess themselves and be less dependent upon us as arbiters of their learning?