Formative Assessment Inside and Outside the Classroom

Basic information

By formative assessment, I mean a process in which students receive feedback regarding their thinking and understanding while they are doing the work associated with learning. To further illustrate what I mean by formative assessment, contrast it with the more typical summative assessment. Often in schools, most assessment is summative: the teachers’ feedback is in the form of a grade and comments after students complete the test, paper, or homework. In this way, the assessment measures the result of learning but does not contribute (in any direct way) during the learning process.

Figure 1:  Summative assessment (only) model

In summative assessment there is no opportunity for the student to make thinking visible until the summative exam.  At that point, it is too late for the instructor to use data from the student’s thinking to adjust content delivery or to address the student’s preconception lens.  There is also no opportunity for the student to learn from his or her mistakes (i.e., to process the material again in light of the final grade).

In formative assessment, the student’s thinking is made visible early and repetitively during the course.  The instructor can use the data from the student’s thinking to adjust instruction priorities and new content delivery.  The instructor and/or the student’s peers can give the student feedback to correct any misconceptions in the student’s lens and the feedback serves as input for further student processing.  A summative grade can also be assigned.

Figure 2:  Formative (+ summative) assessment model

There are many ways to structure a classroom and assignments to incorporate formative assessment into student learning.  Below I will briefly describe my recent, current, and future work.

Recent work

Web-based homework — Recently I published a paper examining the effect of feedback immediacy on my General Chemistry students1.  In particular, I conducted a controlled experiment comparing the learning outcomes of students with traditional “paper” homework assignments versus students with web-based assignments.  The critical difference between these class sections was that students who turned in paper homework experienced a two-day delay in receiving their graded homework back, while students who submitted their homework online received their grades with feedback instantaneously.  The data from this experiment show that web-based homework was as effective as paper-based homework for student learning. One shortcoming of web-based homework is that some students are likely to “game the system,” which can reinforce poor problem-solving practices. This limitation, also seen in physics education research2, warrants future research.

Small group inquiry activities —  In a recent paper3 I compared the effectiveness of inquiry laboratory work with that of “cookbook” laboratory work. Formative assessment is implicit in inquiry as it is defined by the National Science Education Standards4.  During an inquiry activity, among other things, a learner:

  • determines what constitutes evidence and collects it;
  • formulates explanations after summarizing evidence;
  • forms reasonable and logical arguments to communicate explanations.

When these tasks are conducted in an environment of interaction with a small group of peers and a roving instructor, they naturally lead to continuous feedback for the students while they are learning. My research showed that students’ thinking operates on a deeper level during inquiry activities than in “cookbook” laboratories, where most student questions serve only to clarify how to follow the lab manual’s instructions.

Current and future directions

Students will learn best during a class period if they continuously assess their understanding of the material.  For most students, copying notes off the board is woefully insufficient interaction: students simply rush to copy down the notes, postponing engagement with the material until later.  In her book Tools for Teaching, Davis cites research indicating that the typical student’s attention span is only 10 to 20 minutes5.  To keep students engaged, I have tried the following formative assessment techniques to create an interactive classroom.

Questions PLUS a minute to think — I try to avoid a typical classroom pitfall:  many teachers, after asking a question, rarely wait for more than a few seconds before giving prompts6.  In this case, most students will not engage themselves in thinking through the question asked, but will instead wait for the instructor or their fellow students to answer the question for them.  As a result, students are getting no feedback on whether or not they could answer the question (this again returns to my theme of formative assessment).  In contrast, I’ll often preface a question for my class with the request “Please don’t say anything yet.  Think about the problem and write down how you’d solve it.”  Then I stand in front of the class in utter silence for a full minute or more!  I find that with this method nearly every student interacts with the problem and is therefore invested in the subsequent discussion of the solution.  Measuring the impact of this technique is an area of current and future research.

Peer instruction —  To include peers among the students’ sources of feedback, I use Eric Mazur’s technique outlined in his book Peer Instruction7.  Using this method, the instructor employs a classroom response system (“clickers”) or hands out index cards with the letters A, B, C, and D on them.  A multiple-choice conceptual question is then presented with these four choices.  After an appropriate amount of time to think about the problem, the students simultaneously vote.  Now that they have committed to one answer, the students turn to their neighbors, discuss their answers, and get feedback.  Then they vote once more.  Typically, the class converges toward the correct answer.  I plan to investigate what impact this peer feedback has on long-term memory of the concepts taught.

Directed peer-led discussion —  An important skill for upper-division students to acquire is the ability to talk about chemistry to experts and non-experts alike.  The extent to which students can talk about chemistry tells them quickly whether or not they really understand what they are talking about.  What better way, then, to integrate a vigorous cycle of formative assessment than to have each upper-division student lead class discussion on a topic in chemistry, fielding questions and comments from non-expert classmates?

In my upper-division classes I assign each student the task of leading class discussion centered on a particular topic for a given week.  In preparation, all students must read a journal article supplied by the student leader in advance of the discussion and answer a set of questions related to it.  Included among these is the prompt “Write down two good questions about the article to which you do not know the answer.”  These questions are written on the board prior to the first day’s discussion, and the class works together over the subsequent days to formulate the answers.  While the leader guides the investigation, all of the students learn very actively as they puzzle through their own questions.  Importantly, the leader learns the most of all as he or she draws on the full depth of his or her understanding to fulfill the role of expert.

Role of undergraduate researchers

Research in chemical education is interdisciplinary in nature, often sharing aspects with research in the social sciences. This requires well-rounded investigators, such as those found amongst Calvin students. Working collaboratively with me, students engage in:

  • Experiment design: Educational research requires careful planning. Student researchers examine the literature and propose educational experiments designed to avoid common pitfalls.
  • Intervention design: Often experiments include the design of new instructional practices. Students have insights and intuition that are a welcome addition to the plans of faculty, who may not remember what it is like to begin the journey of learning chemistry.
  • Implementation and data collection: A large amount of legwork is needed to conduct educational research. This includes meeting ethical regulations for work with human subjects, recruiting subjects, collecting artifacts of learning outcomes, interviewing subjects, etc.
  • Data analysis: Both quantitative, statistical analysis and qualitative, thematic analysis are used in my research. Students learn tools of the trade that will serve them well in many types of future research.
  • Writing: I work collaboratively with the students to communicate our work through peer-reviewed publications.

1 H. Fynewever, The Chemical Educator 13, 264 (2008).

2 A. M. Pascarella, in National Association for Research in Science Teaching (Vancouver, BC, 2004).

3 P. Meyer, H. H. Hong, and H. Fynewever, The Chemical Educator 13, 120-124 (2008).

4 National Science Education Standards (National Research Council, National Academy Press, Washington, D.C., 2000).

5 B. G. Davis, Tools for Teaching (Jossey-Bass, San Francisco, CA, 1993).

6 M. B. Rowe, in Questions, Questioning Techniques, and Effective Teaching, edited by W. W. Wilen (National Education Association, Washington, D.C., 1987).

7 E. Mazur, Peer Instruction (Prentice Hall, Upper Saddle River, NJ, 1997).


Herbert Fynewever


Associate Professor

