In a recent post, I outlined some ideas that I have about integrating principles of game design into the FYC course. As I pointed out, I’m not all-out gung-ho about the idea of the gamification of education. It turns out that many of my reservations about this latest trend in reforming education are shared by game designers themselves. In her post “Everything Is Game Design,” game designer Elizabeth Sampat argues that it is over-reaching to assume that any group of practitioners can co-opt the complex, abstract principles at play in a successfully engaging (to some) game and apply them to any other domain:
“Gamification” assumes all games share the same mechanics, which means everything that’s gamified is basically the same shitty game. Using badges and leaderboards and offering toothless points for clearly-commercial activities isn’t a magic formula that will engage anyone at any time. Demographics are different, behavior is different . . .
These are the same issues with gamifying the classroom that keep me from wholly embracing the concept. For one, the whole point of a game is that it is . . . well, a game. Games are voluntary. As soon as you force someone to join in a game, it stops being a game for them. It becomes a compulsory activity devoid of intrinsic value, and all of the extrinsic rewards you can throw at players, while perhaps artificially increasing their motivation, cannot turn it back into a game, except perhaps in the negative sense. Even when we gamify a class, we’re still making the learning that takes place within that game compulsory and effectively negating any positive characteristics of gaming that we are attempting to channel. And, as Sampat points out, the characteristics that make any game engaging cannot be standardized. What works for one gamer doesn’t work for another. So, in many ways, game designers face the same kinds of issues and challenges that educators face.
Another point that I think has been largely overlooked in this debate is that, for the large majority of students (if not all), school is already a game. We have goals (behavioral or learning objectives), challenges (in-class activities, homework, exams, and standardized tests), and rewards (grades). We’ve got levels (grade levels based on age in K12 and hours-earned status in college) and leaderboards (A/B honor roll in K12 and President’s and Dean’s lists in college). And we have clearly defined roles (teacher as locus of power and expertise, student as powerless and largely silent novitiate). Some students figure out pretty early how to play the game. In college, these are the students whose identity is inextricably intertwined with their grades. “But I’m an ‘A student,’” they insist when faced with anything other than an A. Other students learn early on how to game the game. These are the students who know how to manipulate the system and those in charge of it and can often be just as successful at winning the game as their overachieving counterparts. But some students never learn how to play the game according to our rules. Others don’t want to play it because they see it for what it is.
Whether we realize it or not, we’re already playing games with our students. And it’s a numbers game. Play the game according to our rules and we’ll reward you with a high GPA and a diploma, with the promise that these things are the badges you need in order to level up to the American Dream. This kind of game is both irrelevant and counterproductive in a culture that is becoming increasingly participatory, rather than competitive, in nature (just read Share or Die: Voices of the Get Lost Generation in the Age of Crisis to get an idea of how important cooperation and collaboration are becoming for those graduating into the current economy). Many educators are fighting to reform the standardized, hierarchical forms of assessment that have been in place since the industrialization of education. But until they succeed in effecting a wholesale paradigm shift, and not just applying a false facade and calling it reform, we are forced (much like our students) to try to figure out ways to hack the game. As Sampat argues:
Finding the reward structures and the rules that are already in place, and figuring out how to make them more effective, is the key to making life better for everyone— not adding an additional layer of uninspiring mechanics that push us to engage with mechanics that already suck.
Just as games are not one-size-fits-all, assessment shouldn’t be one-size-fits-all, neither in terms of standardized criteria applied to all students nor in terms of evaluative formats used for all courses/disciplines. Just as each course has its own unique set of learning objectives, each course should have a different method for assessing how students go about achieving those objectives. I think it is important to explore various assessment methods in an effort to find which is the most effective for a particular course. For example, I have found that a portfolio method is exceptionally well-suited for my composition courses, as it allows for the abstract nature of the writing process and the subjectivity that characterizes the act of evaluating and valuing a piece of writing. But in trying to incorporate a portfolio system into my speech courses (both an introductory oral communication class and an advanced argumentation and debate class), I have had less success, though for different reasons (perhaps due to the differences among the students: freshmen and upper-level secondary-education majors, respectively). As much as the portfolio method places value on each student’s individual learning needs, goals, and achievements, within the current grades-based system, students in certain courses need to be able to visualize their learning at both a qualitative and quantitative level. So, what are the alternatives?
One option that is gaining ground is peer assessment. Cathy Davidson has successfully explored this method in her “This Is Your Brain on the Internet” class (read “How to Crowdsource Grading” for her description of the process and the thought-provoking debate that followed and “How to Crowdsource Grading: A Report Card” for an overview of her students’ responses to the method). Many MOOCs utilize peer assessment out of necessity. According to Debbie Morrison, within the MOOC environment, peer assessment results in an enhanced learning experience for the student, as grading their peers’ work requires a deeper engagement with course content.
I’ve utilized peer assessment in both of my speech classes to varying degrees and with varying levels of success. In my introductory speech class, the students work together at the beginning of the term to develop a checklist for an effective speech (I don’t use rubrics because, in my experience, they become just another hierarchical form of grading that allows students to retain many of the gaming habits they adopted in K12). They do this by watching several speeches on YouTube and creating individual lists of do’s and don’ts, which we then collate into a master list. For each speech, students are evaluated by five randomly selected anonymous peers, who use the checklist to assess the speech. The students are also filmed and they must use both the video and their peers’ checklists to compose an assessment of their speech that they post to an e-portfolio, along with all artifacts associated with the speech (outlines, bibliographies, slideshows, photos of visual aids, the video of the speech, etc.). For this particular class, I have found that a combination of self and peer assessment has been much more effective than a solely self-based assessment (which tended to be superficial) or even an instructor-based assessment (in which students received only one assessment, as opposed to five, and tended to focus more on improving their “grade” than becoming a more effective speaker). With the peer assessment method, students’ speeches are being evaluated by their audience and their focus becomes oriented towards improving their audience’s response to subsequent speeches.
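For readers who manage this logistically, the “five randomly selected anonymous peers” step can be automated. Below is a minimal sketch (the function name and roster are my own hypothetical examples, not part of any course software): shuffling the roster and assigning reviewers by cyclic shifts guarantees that every student both gives and receives exactly five reviews and never evaluates their own speech.

```python
import random

def assign_reviewers(students, k=5):
    """Assign each student k anonymous peer reviewers.

    Shuffles the roster, then uses cyclic shifts so every student
    reviews exactly k peers, is reviewed by exactly k peers, and
    is never assigned their own speech.
    """
    if len(students) <= k:
        raise ValueError("need more students than reviewers per speech")
    roster = students[:]
    random.shuffle(roster)
    n = len(roster)
    # shift 1..k skips shift 0, so no one ever reviews themselves
    return {
        roster[i]: [roster[(i + shift) % n] for shift in range(1, k + 1)]
        for i in range(n)
    }

assignments = assign_reviewers(
    ["Ana", "Ben", "Cal", "Dee", "Eli", "Fay", "Gus"], k=5
)
```

Because the anonymity runs one way (reviewers know whose speech they are scoring, but speakers only see the collated checklists), only the instructor would keep the full assignment mapping.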
I have tried this kind of peer assessment in my debate class with far less success. For one, the class is much smaller, and consists, for the most part, of a cohort of sophomore and junior-level secondary education majors. These students tend to be very cliquish and ironically conservative in terms of the practices they expect in the class; they tend to be “A-gamers” obsessed with acing the course and uncomfortable with the level of abstractness and improvisation involved in debate. As a result, they tend to assess their peers over-generously and resist critiquing one another (one class even admitted to giving each other positive assessments across the board because they didn’t want to “hurt someone’s grade”). They look to me as the expert, so their portfolio reflections tend to be focused on flattering me and the course and highlighting aspects of their performances from my point of view (“If I were the instructor, I would give this speech a [insert grade here]”). Despite my best efforts, these students are resistant to assessment formats that are not instructor-based. So what’s a disruptive pedagogue to do?
I was at first dismissive of contract grading, based both on the distaste I harbor for the artificially hierarchical nature of any type of grades-based assessment and on the name’s implication of a kind of capitalistic supply-and-demand relationship between student and teacher. But I have become less dismissive of the method in terms of its ability to bridge the gap between my students’ need for a quantitative value to be placed on their learning and my own objective of encouraging them to recognize, and become complicit in, the qualitative value of that learning.
For one, I’m hoping that it will eliminate the specter of grades that haunts the course by directly addressing the students’ anxiety regarding their status in a course that has no exams or other easily quantifiable activities. Students will decide what grade they wish to work towards and will have a specific, objective set of criteria that they must achieve in order to earn that grade (yes, I know this sounds just like a syllabus with a traditional grading schema, but contract grading makes the implicit aspects of the traditional schema explicit and, in many ways, mimics the game design principle of starting at zero and gaining points as you go). Once the question of grades is out of the way, perhaps the students will be more willing to focus on learning and improving.
Secondly, contract grading requires student input in regards to the challenges that must be met in order to level-up (yes, I know I’m wading back into gaming territory, but, as I’ve argued, our goal should be figuring out what works for a particular course and cohort of students rather than a wholesale dismissal or acceptance of any one method or theory). Often, in order to earn an A or a B, students must complete additional learning tasks, sometimes choosing between several options, which they can be invited to develop. This aspect of contract grading is the one that I find most promising in terms of encouraging student investment in the learning environment. While I have long preached to students that, in the words of Lennon and McCartney, “in the end, the love you take is equal to the love you make,” contract grading makes student-centered initiative an explicitly integral component of the course.
Thirdly, contract grading will allow me to address both the students’ insistence that I fulfill the role of expert assessor and my wish for them to fulfill the role of deliberate and reflective practitioner. Different grades require different levels of mastery, so students who contract for a certain grade must revise and/or re-attempt assignments that don’t demonstrate mastery. While my debate students can’t re-do a live debate, they can complete a video re-enactment that improves upon their live performance or record a play-by-play self-critique using VoiceThread or screencasting software. In addition, some of the optional assignments can require peer or self-assessment or other types of reflective learning practices.
While I’m not completely comfortable with contract grading (just as I am not completely comfortable with gamification), I also recognize that other assessment methods are not working for my upperclassmen and, as a result, are interfering with my efforts to push them beyond a superficial engagement with their learning in the course. I believe firmly that we must recognize our students’ needs, values, and histories; but we can’t pick and choose which of those we take into consideration when designing their learning environments. Sampat makes a point that I think is important for us to keep in mind in the process:
The core principle to remember is that game design is everywhere. Instead of trying to stick a crappy, half-formed game onto real life, the real challenge— the one that’s tough, the one that will bring the greatest results— is to fix the bad game design that’s all around us.
Students won’t be open to assessment that values quality over quantity or process over product until we recognize that our current assessment paradigm is a badly designed game that needs to be torn down and redesigned. Sampat suggests two questions to ask when considering whether or not something is badly designed:
- What’s supposed to be the goal here?
- Is this experience set up to help or hinder my ability to reach that goal?
- “Contract Grading + Peer Review: Here’s How It Works” by Cathy Davidson
- “What Is Contracted Grading” by Michelle Stephens
- “Using Grading Contracts” by Billie Hara
- “Avoid the Ranking Obsession: Contract Grading” by Nicole Papaioannou