In past posts, I’ve written about various ways that I use Google Docs (now Google Drive) in my courses, including collaborative writing and crowdsourcing annotations for the texts we are reading. Recently, I’ve experimented with a third use for another tool in the Google Docs collection: using Google Forms to crowdsource assessments of the students’ blog posts.
I’ve regularly discussed my struggles with assessment. This semester, that struggle has intensified as I have found it increasingly difficult to manage assessing and providing feedback on students’ work. This has something to do with the fact that I am teaching five classes, three of which are composition classes. But it also has a lot to do with the fact that all of my classes now use the challenge-based learning model, so the work that students are doing is more challenging and complex. This is especially true now that we are near the end of the term, because this is when the most creative and cognitively dissonant work is done. I have found it difficult to adequately divide my attention between students’ regular writing assignments and the work they are doing behind the scenes. In trying to figure out how to take some of the onus off of myself without sacrificing timely feedback, I immediately thought of Cathy Davidson’s method of crowdsourcing grading. But as I’ve mentioned in my previous posts on assessment, I’ve met with some resistance from students who don’t want the burden or responsibility of providing negative assessments of their peers:
As a result, they tend to assess their peers over-generously and resist critiquing one another (one class even admitted to giving each other positive assessments across the board because they didn’t want to “hurt someone’s grade”).
One method that I have found relatively successful for overcoming these feelings is making all assessments anonymous, especially in low-stakes, informal situations such as peer review. In considering how I could formalize anonymous peer assessment, I immediately thought of Google Forms. This Google app allows you to create a form that includes various types of questions, such as multiple-choice, checklist, and open-ended questions. Once a respondent completes the form, the answers are automatically transferred to a spreadsheet. The creator of the form can then manipulate and share the results however they wish, including viewing them as a graphic summary. The sharing options make it easy to pass the assessment results on to students, and the summary is a quick way to get an idea of overarching issues in the students’ work (as well as their strengths).
Since it’s so late in the term, I decided to pilot peer assessment rather than integrate it as a formal course assignment (students are required to complete at least two assessments, but I am not assigning which peers they must assess). I created a form based on the list of criteria for a good blog post that the class worked together to create at the beginning of the term. In addition to these items, I added two open-ended questions that require students to offer some anecdotal feedback on their peers’ posts. Here is the form I created and an excerpt from the results summary:
Once I received the results, it was easy to share them with the students. I simply sorted the spreadsheet alphabetically by the post-title column so that all entries for a particular post were together, then hid the column containing the assessor’s name. Next, I selected the cells that applied to a specific post and downloaded the selection as a PDF, which I emailed to the student. (Because I wanted to include the column titles in each selection, I worked down the spreadsheet, hiding the rows for each student’s post after downloading them and before selecting the cells for the next post.)
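For those who would rather not repeat that hide-and-select routine by hand, the same workflow can be sketched in a short script. This is only an illustration, assuming the Form responses have been exported as a CSV and assuming hypothetical column names ("Assessor Name" and "Post Title") that you would adjust to match your actual form questions:

```python
# A sketch of automating the per-post splitting described above.
# Assumes a CSV export of the Form responses with hypothetical
# columns "Assessor Name" and "Post Title"; adjust to your form.
import csv
from collections import defaultdict

def split_responses_by_post(csv_path):
    """Group response rows by post title, dropping the assessor column."""
    grouped = defaultdict(list)
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            row.pop("Assessor Name", None)  # keep feedback anonymous
            grouped[row["Post Title"]].append(row)
    return grouped

def write_per_post_files(grouped, out_dir="."):
    """Write one CSV per post so each author receives only their own feedback."""
    for title, rows in grouped.items():
        safe = "".join(c if c.isalnum() else "_" for c in title)
        with open(f"{out_dir}/{safe}.csv", "w", newline="", encoding="utf-8") as f:
            writer = csv.DictWriter(f, fieldnames=rows[0].keys())
            writer.writeheader()
            writer.writerows(rows)
```

Each resulting file keeps the column titles with every student’s set of responses, which is exactly what the manual hide-and-download steps were preserving.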
What I have found so far is that using Google Forms is an effective method for crowdsourcing assessment of students’ writing. Firstly, it’s quick and convenient for students to complete the assessment. Secondly, it allows for anonymity, eliminating students’ fears about offering negative feedback that may hurt their peers’ feelings or damage their interpersonal relationships in and out of class. Lastly, it provides authors with multiple pieces of feedback on their writing, all clearly organized. The fact that some of these pieces of feedback may focus on different aspects of their work and/or may compete with one another is actually a positive, as it helps authors see how different readers focus on different aspects of a piece of writing and have different expectations and needs. I think this kind of assessment is also especially effective because, as I tell students, when they write a blog post, I am not their primary audience; rather, their peers and anyone else who might be interested in their topic are their target audiences. Receiving feedback from their peers/audience makes this reality tangible to them.
In a recent post, I outlined some ideas that I have about integrating principles of game design into the FYC course. As I pointed out, I’m not all-out gung-ho about the idea of the gamification of education. It turns out that many of my reservations about this latest trend in reforming education are shared by game designers themselves. In her post “Everything Is Game Design,” game designer Elizabeth Sampat makes clear that the assumption that any group of practitioners can co-opt and apply the extremely complex and abstract principles at play in a successfully engaging (to some) game to any other domain is over-reaching:
“Gamification” assumes all games share the same mechanics, which means everything that’s gamified is basically the same shitty game. Using badges and leaderboards and offering toothless points for clearly-commercial activities isn’t a magic formula that will engage anyone at any time. Demographics are different, behavior is different . . .
These are the same issues with gamifying the classroom that keep me from wholly embracing the concept. For one, the whole point of a game is that it is . . . well, a game. Games are voluntary. As soon as you force someone to join in a game, it stops being a game for them. It becomes a compulsory activity devoid of intrinsic value, and all of the extrinsic rewards you can throw at them, while perhaps artificially increasing their motivation to play, cannot turn it back into a game, except in a negative sense. Even when we gamify a class, we’re still making the learning that takes place within that game compulsory, effectively negating any positive characteristics of gaming that we are attempting to channel. And, as Sampat points out, the characteristics that make a game engaging cannot be standardized. What works for one gamer doesn’t work for another. So, in many ways, game designers face the same kinds of issues and challenges that educators face.
Another point that I think has been largely overlooked in this debate is that, for the large majority of students (if not all), school is already a game. We have goals (behavioral or learning objectives), challenges (in-class activities, homework, exams, and standardized tests), and rewards (grades). We’ve got levels (grade levels based on age in K12 and hours-earned status in college) and leaderboards (A/B honor roll in K12 and President’s and Dean’s lists in college). And we have clearly defined roles (teacher as locus of power and expertise, student as powerless and largely silent novitiate). Some students figure out pretty early how to play the game. In college, these are the students whose identity is inextricably intertwined with their grades. “But I’m an ‘A student,’” they insist when faced with anything other than an A. Other students learn early on how to game the game. These are the students who know how to manipulate the system and those in charge of it and can often be just as successful at winning the game as their overachieving counterparts. But some students never learn how to play the game according to our rules. Others don’t want to play it because they see it for what it is.
Whether we realize it or not, we’re already playing games with our students. And it’s a numbers game. Play the game according to our rules and we’ll reward you with a high GPA and a diploma, with the promise that these things are the badges you need in order to level up to the American Dream. This kind of game is both irrelevant and counterproductive in a culture that is becoming increasingly participatory, rather than competitive, in nature (just read Share or Die: Voices of the Get Lost Generation in the Age of Crisis to get an idea of how important cooperation and collaboration are becoming for those graduating into the current economy). While many educators are fighting to reform the standardized, hierarchical forms of assessment that have been in place since the industrialization of education, until they are successful at effecting a wholesale paradigm shift and not just applying a false facade and calling it reform, we are forced (much like our students) to try to figure out ways to hack the game. As Sampat argues:
Finding the reward structures and the rules that are already in place, and figuring out how to make them more effective, is the key to making life better for everyone— not adding an additional layer of uninspiring mechanics that push us to engage with mechanics that already suck.
Just as games are not one-size-fits-all, assessment shouldn’t be one-size-fits-all, neither in terms of standardized criteria applied to all students nor evaluative formats used for all courses/disciplines. Just as each course has its own unique set of learning objectives, each course should have a different method for assessing how students go about achieving those objectives. I think it is important to explore various assessment methods in an effort to find which is the most effective for a particular course. For example, I have found that a portfolio method is exceptionally well-suited for my composition courses, as it allows for the abstract nature of the writing process and the subjectivity that characterizes the act of evaluating and valuing a piece of writing. But in trying to incorporate a portfolio system into my speech courses (both an introductory oral communication class and an advanced argumentation and debate class), I have had less success, though for different reasons (perhaps due to the differences among the students: freshmen and upper-level secondary-education majors, respectively). As much as the portfolio method places value on each student’s individual learning needs, goals, and achievements, within the current grades-based system, students in certain courses need to be able to visualize their learning at both a qualitative and quantitative level. So, what are the alternatives?
One option that is gaining ground is peer assessment. Cathy Davidson has successfully explored this method in her “This Is Your Brain on the Internet” class (read “How to Crowdsource Grading” for her description of the process and the thought-provoking debate that followed and “How to Crowdsource Grading: A Report Card” for an overview of her students’ responses to the method). Many MOOCs utilize peer assessment out of necessity. According to Debbie Morrison, within the MOOC environment, peer assessment results in an enhanced learning experience for the student, as grading their peers’ work requires a deeper engagement with course content.
I’ve utilized peer assessment in both of my speech classes to varying degrees and with varying levels of success. In my introductory speech class, the students work together at the beginning of the term to develop a checklist for an effective speech (I don’t use rubrics because, in my experience, they become just another hierarchical form of grading that allows students to retain many of the gaming habits they adopted in K12). They do this by watching several speeches on YouTube and creating individual lists of do’s and don’ts, which we then collate into a master list. For each speech, students are evaluated by five randomly selected anonymous peers, who use the checklist to assess the speech. The students are also filmed and they must use both the video and their peers’ checklists to compose an assessment of their speech that they post to an e-portfolio, along with all artifacts associated with the speech (outlines, bibliographies, slideshows, photos of visual aids, the video of the speech, etc.). For this particular class, I have found that a combination of self and peer assessment has been much more effective than a solely self-based assessment (which tended to be superficial) or even an instructor-based assessment (in which students received only one assessment, as opposed to five, and tended to focus more on improving their “grade” than becoming a more effective speaker). With the peer assessment method, students’ speeches are being evaluated by their audience and their focus becomes oriented towards improving their audience’s response to subsequent speeches.
I have tried this kind of peer assessment in my debate class with far less success. For one, the class is much smaller, and consists, for the most part, of a cohort of sophomore and junior-level secondary education majors. These students tend to be very cliquish and ironically conservative in terms of the practices they expect in the class; they tend to be “A-gamers” obsessed with acing the course and uncomfortable with the level of abstractness and improvisation involved in debate. As a result, they tend to assess their peers over-generously and resist critiquing one another (one class even admitted to giving each other positive assessments across the board because they didn’t want to “hurt someone’s grade”). They look to me as the expert, so their portfolio reflections tend to be focused on flattering me and the course and highlighting aspects of their performances from my point of view (“If I were the instructor, I would give this speech a [insert grade here]”). Despite my best efforts, these students are resistant to assessment formats that are not instructor-based. So what’s a disruptive pedagogue to do?
While I was at first dismissive of contract grading, based on the distaste I harbor for the artificially hierarchical nature of any type of grades-based assessment (and the name’s implications of a kind of capitalistic supply-and-demand relationship between student and teacher), I have become less so. The method has the potential to bridge the gap between my students’ need for a quantitative value to be placed on their learning and my own objective of encouraging them to recognize and become invested in the qualitative value of that learning.
For one, I’m hoping that it will eliminate the specter of grades that haunts the course by directly addressing the students’ anxiety regarding their status in a course that has no exams or other easily quantifiable activities. Students will decide what grade they wish to work towards and will have a specific, objective set of criteria that they must achieve in order to earn that grade (yes, I know this sounds just like a syllabus with a traditional grading schema, but contract grading makes the implicit aspects of the traditional schema explicit and, in many ways, mimics the game design principle of starting at zero and gaining points as you go). Once the question of grades is out of the way, perhaps the students will be more willing to focus on learning and improving.
Secondly, contract grading requires student input in regards to the challenges that must be met in order to level-up (yes, I know I’m wading back into gaming territory, but, as I’ve argued, our goal should be figuring out what works for a particular course and cohort of students rather than a wholesale dismissal or acceptance of any one method or theory). Often, in order to earn an A or a B, students must complete additional learning tasks, sometimes choosing between several options, which they can be invited to develop. This aspect of contract grading is the one that I find most promising in terms of encouraging student investment in the learning environment. While I have long preached to students that, in the words of Lennon and McCartney, “in the end, the love you take is equal to the love you make,” contract grading makes student-centered initiative an explicitly integral component of the course.
Thirdly, contract grading will allow me to both address the students’ insistence that I fulfill the role of expert assessor and my wish for them to fulfill the role of deliberate and reflective practitioner. Different grades require different levels of mastery, so students who contract for a certain grade must revise and/or re-attempt assignments that don’t demonstrate mastery. While my debate students can’t re-do a live debate, they can complete a video re-enactment that improves upon their live performance or record a play-by-play self-critique using Voice Thread or screencasting software. In addition, some of the optional assignments can require peer or self-assessment or other types of reflective learning practices.
While I’m not completely comfortable with contract grading (just as I am not completely comfortable with gamification), I also recognize that other assessment methods are not working for my upperclassmen and, as a result, are interfering with my efforts to push them beyond a superficial engagement with their learning in the course. I believe firmly that we must recognize our students’ needs, values, and histories; but we can’t pick and choose which of those we take into consideration when designing their learning environments. Sampat makes a point that I think is important for us to keep in mind in the process:
The core principle to remember is that game design is everywhere. Instead of trying to stick a crappy, half-formed game onto real life, the real challenge— the one that’s tough, the one that will bring the greatest results— is to fix the bad game design that’s all around us.
Students won’t be open to assessment that values quality over quantity or process over product until we recognize that our current assessment paradigm is a badly designed game that needs to be torn down and redesigned. Sampat suggests two questions to ask when considering whether or not something is badly designed:
What’s supposed to be the goal here?
Is this experience set up to help or hinder my ability to reach that goal?
Resources on Contract Grading
These are the sources that I consulted to help me to better understand the possibilities afforded by contract grading:
My last post really got me thinking about what kind of learning environment I’d like to design for my next hybrid First-Year Composition course, especially after one of my former students responded to the question that I posed at the end of the post:
…do we want to challenge our students or do we want them to challenge themselves?
The answer to this question, according to my student, is that what we should try to achieve is a balance between the two. Sometimes you need someone else pushing and challenging you to challenge yourself and to meet those challenges. I think it’s a valid point. But how do we find that balance? And how do we know at what level we can safely challenge students without overwhelming, frustrating, and alienating them?
These are the questions that I’m grappling with as I begin designing my upcoming Hybrid FYC class.
Yesterday, I happened to read the article “Why Flip the Classroom When We Can Make It Do Cartwheels?” by Cathy Davidson. The article focuses on Duke’s Haiti Lab, an interdisciplinary experience that places students in a global research and learning laboratory in which their work has an impact beyond the classroom. This is exactly the type of challenge that I would like to present to my students. But how do I do so with very limited resources, just myself to make it happen, and a group of students fresh out of high school, many of whom haven’t decided on a major and have no clue what they are good at or passionate about?
The central focus of the Haiti Lab is a problem. All of the students focus on this problem, just in different ways, using different methods, and while viewing the problem through different disciplinary lenses. So, the Haiti Lab presents the same kind of immersion and autonomy that I managed to establish in my first hybrid FYC class, just on a grander scale and in a way that flattens the classroom walls and makes the world the classroom. I’m not sure that I want to tackle the world just yet, especially on my own. So, I’ve decided to settle for making the university my students’ classroom for now.
As I mentioned in my last post, I’m a member of the 21st Century Classroom Initiative committee. We meet once a month and, in between meetings, individual members and specialized groups research issues related to the 21st century classroom and visit other campuses to look at models of 21st century classrooms. We post our research findings to a database and discuss the results and our own university’s progress each month during our face-to-face meetings. It’s an exciting committee to be on and it really has become an interdisciplinary effort. There are representatives from each college, various departments, and administrators and staff who are all focused on turning our university into a 21st century learning environment. The only group not represented on the committee is the students themselves. So, I’ve decided that maybe I should change that.
What if I asked my hybrid FYC students to help design a 21st century university? What if I allowed them to decide, with no financial restrictions, what their ideal university would look and sound like? How would classrooms look? How would classes be taught? What would be going on in the classrooms? What would be going on in other spaces? What other spaces would there be? What would they look like?
What if I asked my students to use their own passions and interests to research and create solutions for an outdated mode of education? Solutions that would impact their own education? What if I asked them to present their findings to the committee that is in charge of deciding which solutions to consider and adopt?
Would my freshmen be ready to meet such a challenge? Would they be willing to do it?
At this point, I don’t have any answers to these questions. But I wonder how many questions the designers of the Haiti Lab had when they first began to think about creating an immersive, interdisciplinary, real-world learning experience. And I wonder if they waited until they had answers to all of those questions before deciding to go ahead with their vision.