Whether you love Google or hate it, there’s no denying that the company is at the leading edge of open source apps and educational resources. And whether we like it or not, the majority of students are using Google as their primary research tool (and, according to a study summarized by Sarah Kessler, they’re not using it very effectively). I use Google apps extensively in my hybrid courses and, recognizing a need on my students’ part to learn how to use the internet more effectively and critically, I’ve begun to integrate the Google search engine into my research workshops. So when Google recently offered a MOOC entitled “Power Searching with Google,” I immediately signed up, hoping in the process to kill two birds with one stone: 1) to learn some Google search strategies that I could pass along to my students, and 2) to get a taste of the MOOC experience. It was a mixed bag.
In terms of set-up, the course was very straightforward. Lessons consisted of video demonstrations followed by activities designed to test your ability to apply the skills addressed in each video. Assessment consisted of a pre-course assessment (meant to gauge existing knowledge of Google search features), a mid-course assessment, and a final assessment. The scores for the mid-course and final assessments were averaged together to determine your “grade” for the course and a passing grade resulted in a certificate of completion. There was also a discussion forum that you could voluntarily participate in.
What worked

1) Individualized pace: While there were deadlines for the mid-course and final assessments, you could work through the course materials at your own pace as long as you were ready to meet those deadlines. This worked great for me because I could complete individual lessons or entire units as it suited me. Considering the hectic schedule I have this summer, this was by far the most effective aspect of the course for me.
2) Paced release of materials: While I could work at my own pace on the materials available to me, I was limited by the fact that the units were released gradually. This actually turned out to be a positive: since I couldn’t see the entirety of the course materials at the beginning, I wasn’t overwhelmed by the amount of material I would need to cover, and I remained focused on each set of materials I had access to.
3) Do-overs: Both practice activities and assessments were set up to allow multiple attempts at answering questions correctly. You could check your answers before submitting your assessments and wrong answers to practice activities usually triggered some feedback in terms of what to review in order to better understand the skill addressed in the activity. I found this to be a very effective method for learning because I didn’t have a fear of failure hanging over me that a single-attempt set-up would have created.
4) Leveling up or down: While I didn’t actually make use of it, there was the option to change the difficulty level of practice activities to either an easier activity or a harder activity. Again, I see this as being an effective method for individualizing assessment. There was also an option to skip activities and see the correct answers. This was effective for those search functions that I was already familiar with and didn’t necessarily want to waste my time trying out; being able to see the answers allowed me to self-assess my prior knowledge and move forward quickly if I wanted to.
What didn’t work

1) Boring videos: I don’t expect lectures and demonstrations to be entertaining, but I do expect them to be somewhat engaging on an intellectual level. The videos were not long (the longest was a little over eight minutes), and this brevity was their only saving grace. It wasn’t just the fact that the instructor sat on a couch the whole time (I suppose in an effort to make the instruction feel more personal); the content itself dragged in several lessons. Some lessons were far too simplistic and some were overly repetitive. A boring presenter is boring, whether IRL or on video.
2) Google Chrome required: All demonstrations were done in Chrome, so I could not replicate some of the tasks, such as the Search by Image function, as demonstrated. There was no discussion by the instructor of the different ways to complete these tasks in other browsers, which often led to frustration on my part; I did eventually receive help via the forum, but only after I had completed the final assessment. If I had taken this course IRL, I would have been able to ask the instructor for clarification.
3) Difficult tasks given short shrift: There were a few lessons that contained difficult concepts, such as using and interpreting results on WHOIS databases. There was little time spent discussing and demonstrating how to use these databases (although the instructor acknowledged the difficulties of using them), yet being able to do so was part of the final assessment. As a student, this was extremely frustrating and I quickly gave up trying to figure it out by myself (my frustration is demonstrated with some rather derogatory doodles next to my notes on this lesson and a final assessment of the lesson as “useless”). Again, IRL instruction would have afforded me the opportunity to seek clarification on these muddy points and perhaps encourage the instructor to extend the time spent on the databases.
4) Plug-and-chug assessment: While the practice activities required direct application of skills, the assessments were multiple choice and fill-in-the-blank problems that, for the most part, simply required regurgitating information from the instructor’s demonstrations. At this point, I’m not really certain how much of the course I have really learned and internalized and how much I’ve simply managed to maintain in my short-term memory.
5) Forum confusingly organized and asynchronous: The few times that I did try to use the forum, I had difficulty navigating it. It was supposedly organized by lesson, but I could never find a direct link to the discussion threads for a specific lesson, and it seems that most people just posted wherever they felt like it. When I posed questions, I did not receive immediate (or even proximal) feedback; the earliest I received an answer was a little over 24 hours after posting the question. Of course, one aspect of open online learning that MOOCs bank on is student participation; they count on the fact that other students are probably online when questions and comments are posted and are likely to respond faster than forum moderators. However, in this particular MOOC students did not seem particularly eager to help each other out or respond to each other’s posts, and all of my questions were answered by forum moderators.
What does this mean for MOOCs?
My initial response to the idea of MOOCs was hesitantly hopeful. Having completed one, I’m pretty much stuck with the same reservations about them that I have for tuition-based online courses. They are inherently more suited to certain types of students: highly motivated, self-aware learners with good time management skills and a high tolerance for working alone, without immediate access to, and feedback from, their instructor and classmates.
In terms of instruction, making an online course engaging requires as much effort as f2f instruction, if not more, because it’s far easier for students to become disengaged from an online course, especially one that’s free and offers no extrinsic motivation to stay connected and finish. The one thing that’s possible in online course design that MOOCs cannot capitalize on, due to their massive size, is individualized instruction. I’m not completely sure of the purpose of the pre-course assessment for Google’s MOOC (unless it’s simply for their own data collection purposes) because the rest of the course was not structured based on my answers to the initial assessment questions. IRL and in small online courses, diagnostic assessments allow for individualization because you can use the information garnered to direct students towards the materials that will do the most to fill the gaps in their prior knowledge.
My first MOOC was like the gingerbread house in Hansel and Gretel. It seemed to offer an educational paradise: no-cost, developed and delivered by domain experts (whose “certificate of completion” holds cachet), flexible in terms of when and how I completed it, open in terms of whom I would be sharing the experience with. Unfortunately, the reality did not live up to the fantasy. Of course, unlike Hansel and Gretel, I could have left whenever I wished. Instead, I stuck it out to the bitter end, hoping to find some redeeming quality in something that held such promise.
What does this mean for hybrid and fully f2f courses?
We need to continue to figure out how to capitalize on the best aspects of f2f learning and online learning. Some variables remain the same, no matter what the medium of instruction. Boring is boring. Materials and activities need to be intellectually engaging and individualized to the greatest extent possible. Community is essential; students need access to their teacher and their classmates, whether physically or virtually, and some of that contact needs to be synchronous (which is one reason that I think hybrid courses are so effective). Assessment needs to be formative, immediate, and authentic. And no type of assessment can measure engagement. I earned a pretty high score in the Google MOOC, a score that does not reflect the boredom and frustration that I experienced. While I certainly came away from the course with an extended set of Google search skills that I did not possess prior to the course, I’m not sure that I would have completed the course had I been less motivated (the certificate of completion will help to pad my annual faculty review packet).
How many of our own students have walked away from our courses with A’s or B’s, despite boredom or frustration? If we base the success of our courses on the grades that students come away with, we’re ignoring the aspects of learning that MOOCs make obvious: the hardest working and most motivated students will succeed, no matter how poorly designed the learning experience. So, it’s important for students to have opportunities to share anecdotal feedback, not just at the end of the course, but from the very beginning and throughout the course. And it’s important that we be willing to act on that feedback.
In hindsight, I now recognize that it will be very difficult for designers of MOOCs to do this. In fact, it is difficult for MOOCs to enact most of the learning practices that I value: learning-centered instructional design; a skatepark-like learning environment; immediacy; flexibility; authenticity; hybridity; intimacy with the materials, ideas, and people who make up the body of the course. Instead of heralding MOOCs as the salvation of education, we need to recognize them for what they are: an alternative that works for some learners on some levels. However, it’s also an alternative that is still in its infancy and still has room to grow; in fact, I think that DS106 demonstrates what MOOCs are capable of with the right kind of instructors and objectives. Whether or not they can, as a general rule, get there is up for grabs. What makes DS106 work is that it is, like the best IRL course, a truly student-centered community, in that students develop and help assess the assignments. It’s a course completely devoid of sticks and carrots and completely built on the desire to be a part of a unique learning community.
This ideal of a free and open learning community built upon choice and intrinsic motivation is the real promise of MOOCs. But if we continue, as some institutions and companies do, to look to MOOCs as a vehicle for the mass-production and broad dissemination of canned content, we’ll never get there.