
Peer Instruction in Computer Science

This is my attempt to collect resources related to Peer Instruction (PI) in Computer Science (CS). I am new to PI, having been introduced to it through a paper at SIGCSE '10. I certainly don't claim any special proficiency in implementing or evaluating PI in CS. I've only used PI a couple of times. ... But someone had to start a page like this, so here we are.

Below, I provide:

  • Abstracts and links to papers about PI, with a focus on PI-CS papers
  • ConcepTest and Reading Quiz banks (developed by me and other CS lecturers)
  • Hardware and software resources for implementing PI

If you have stuff you'd be willing to share, please let me know!

PI Methodology Papers

  • Crouch, C. H. and E. Mazur. 2001. Peer Instruction: Ten years of experience and results. American Journal of Physics. 69:970–977.
    A nice description of the PI methodology and its refinement in physics. The authors indicate that PI is meant to engage all students in class, rather than only the few who would otherwise be active. Performance on the physics Force Concept Inventory increased substantially when changing from traditional lectures to PI in both a calculus-based and an algebra-based physics course. Quantitative problems do not work well as ConcepTests; the authors describe how they ensure competence in working quantitatively as well as conceptually. The format of reading quizzes, the restructuring of discussion sessions, and ways to motivate students are also covered.
  • Crouch, C. H., Watkins, J., Fagen, A. P., and Mazur, E. 2007. Peer instruction: Engaging students one-on-one, all at once. In Research-Based Reform of University Physics, E. F. Redish and P. J. Cooney, Eds. American Association of Physics Teachers.
    This paper expands on the above. It gives concrete advice for ConcepTest generation (e.g. ConcepTests should be unambiguous, require thought rather than rote application of an algorithm, etc.), and compares clickers to flashcards and raising hands. The paper also argues for the use of a "predict" ConcepTest preceding a classroom demonstration. Students who are given the opportunity to predict prior to the presentation (as opposed to only seeing the presentation without predicting) are much more likely to be able to explain the reasons for the observed outcomes when tested at the end of the semester. Another finding: while we might think that conceptual exam questions are easier than quantitative, problem-solving questions, students often find conceptual questions even more difficult! The paper also describes the findings of a web-based survey of instructors around the world who use PI, giving indications of student learning, student satisfaction, instructor perceptions, use of reading quizzes, methods of polling, and the most significant challenges to PI adoption. (Polling was used by only 8% of instructors, but the survey was conducted in 1999.)
  • Beatty, I., Gerace, W., Leonard, W., and Dufresne, R. 2006. Designing effective questions for classroom response system teaching. American Journal of Physics, 74(1), 31-39.
    This paper gives concrete advice for generating clicker questions; I have found the advice to scale quite well for PI use. To effectively use an existing question, or to develop our own, we must appreciate the design logic of a question: why the question is good, and what it seeks to uncover. Each question should have a threefold goal: a content goal (what subject material do we want students to learn?), a cognitive goal (how do we want our students to think?), and a metacognitive goal (what do we want students to learn about our subject in general?). Many tactics are given for designing clicker questions that accomplish these goals.

PI in Physics

  • Lasry, N., Mazur, E., and Watkins, J. 2008. Peer instruction: From Harvard to the two-year college. American Journal of Physics, 76(11), 1066-1069.
    This paper helps to legitimize PI beyond Harvard. A comparison of a PI class with a traditional class found that the PI class performed better on the final exam (though not statistically significantly better, as was found at Harvard). Students were also divided into low-background and high-background groups, based on a pre-course FCI. Comparing post-FCI scores: low-background PI students outperform low-background traditional students, high-background PI students outperform high-background traditional students, and (the coolest finding) low-background PI students outperform high-background students in a traditional offering.
  • Lasry, N. 2008. Clickers or Flashcards: Is There Really a Difference? The Physics Teacher, 46(4), 242-244.
    To answer the question in the title: no... and yes. No: a comparison of post-FCI scores between students in a clicker classroom and students in a flashcard classroom found no significant difference, and there was likewise no difference in end-of-semester exam performance. Yes: clickers are better because they give us precise real-time feedback and help us archive data that can be used to improve our teaching.
  • Reay, N. W., Li, P., and Bao, L. 2008. Testing a new voting machine question methodology. American Journal of Physics, 76(2), 171-178.
    Our one-off clicker questions might indicate learning gains, but we don't know to what extent students can generalize from a clicker question to other contexts. This paper suggests using multiple questions per concept, in order to gauge students' understanding across contexts. The study used both a traditional section and a section that used clicker questions (occasionally PI-informed) for a small portion of lecture time. Voting students scored higher on post-tests; males and females gained equally in the voting section, whereas males gained more than females in the traditional section.

What's Going On When They Talk?

  • James, M. 2006. The effect of grading incentive on student discourse in Peer Instruction. American Journal of Physics. 74:689.
    A common discussion involves the tradeoffs between high- and low-stakes grading of clicker questions. Should students who get questions right earn more points than those who get them wrong? According to this article, no: with high-stakes grading, discussions are more often dominated by one person in the group, giving others less opportunity to offer their perspectives. In the high-stakes setting, the person who dominated each group was also more likely to earn a higher grade than the others in the group. The author takes this to mean that dominating a PI conversation correlates with knowledge. I agree, but might it also reflect the benefits PI confers on these students through the very process of talking it out?

PI-CS Papers

Question Banks

Please see peerinstruction4cs.org for all of our PI materials.

Hardware and Software

  • i>clicker. This is the clicker system I use. These clickers have five option buttons, just enough for multiple-choice ConcepTests. I am on an old software version (5.2) because I have made several modifications to the source code that I'd rather not keep re-implementing. (My version plays a sound effect when the timer starts and stops, and another sound when the graph is displayed. It also displays a textual description of the graph and makes other miscellaneous accessibility improvements.)
  • clickdata.zip. This archive includes a Python script I wrote to extract useful information from my i>clicker .csv files. It expects to find the L*.csv files generated by i>clicker, processes all such files in its directory, and generates session, pre/post, normalized gain (NG), and histogram data. The histo file can then be further processed by prochisto.gp, a gnuplot script that generates a histogram from the data. (A sketch of this kind of processing appears after this list.)
  • quizcode.zip. I use these Python scripts to administer reading quizzes. I wrote them to avoid having my students log in to a course management system. I wanted students to be able to get to the quizzes quickly from the course website; all they do is type their student number and go. Quiz responses are stored in text files for easy reading and processing. The quizzes themselves are also plain text files for simple data entry, and the scripts support an availability date and an expiry date for each quiz. (A sketch of the availability check appears after this list.) This extremely simple system has worked quite well for me.
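
For the curious, here is a minimal sketch of the kind of processing clickdata.zip performs. This is not the actual script: the .csv layout assumed below (each cell holding at most one A-E vote) is a guess, and the real i>clicker format may well differ, as may clickdata's normalized gain calculation.

    import csv
    import glob
    from collections import Counter

    def tally_session(path):
        # Count the A-E votes in one session file.
        counts = Counter()
        with open(path, newline="") as f:
            for row in csv.reader(f):
                for cell in row:
                    vote = cell.strip().upper()
                    if len(vote) == 1 and vote in "ABCDE":
                        counts[vote] += 1
        return counts

    def normalized_gain(pre, post):
        # Normalized gain for a pre/post vote pair, where pre and post
        # are the fractions of correct votes before and after discussion.
        if pre >= 1.0:
            return 0.0
        return (post - pre) / (1.0 - pre)

    # Process every i>clicker session file in the current directory.
    for path in sorted(glob.glob("L*.csv")):
        counts = tally_session(path)
        print(path, [counts[v] for v in "ABCDE"], "n =", sum(counts.values()))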
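
Similarly, here is a rough sketch of the availability/expiry check described for quizcode.zip. Again, this is not the actual code: the header format assumed below ("available:" and "expires:" lines at the top of each quiz file) and the date format are inventions for illustration.

    from datetime import datetime

    DATE_FORMAT = "%Y-%m-%d %H:%M"  # assumed; the real format may differ

    def quiz_is_open(quiz_path, now=None):
        # True iff the current time falls within the quiz's availability
        # window, read from "available:" and "expires:" header lines.
        now = now or datetime.now()
        available = expires = None
        with open(quiz_path) as f:
            for line in f:
                key, _, value = line.partition(":")
                key = key.strip().lower()
                if key == "available":
                    available = datetime.strptime(value.strip(), DATE_FORMAT)
                elif key == "expires":
                    expires = datetime.strptime(value.strip(), DATE_FORMAT)
                if available and expires:
                    break  # headers found; ignore the quiz body
        return (available is not None and expires is not None
                and available <= now <= expires)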

Help!

If you have pointers to other PI-CS articles of interest, links to your own ConcepTests or reading quizzes, or other tools for processing PI data or administering PI in general, please get in touch. Thank you!