“If teaching was as simple as telling, we’d all be a lot smarter than we are.”

-Mark Twain

Teachers, how confident are you about the validity and reliability of the last test you gave to your students?  Did the assessment measure what you intended?  Are your results reliable?  How do you know?  Do you regularly and systematically analyze the results of each test (summative assessment) with your students?

On the surface, it would appear that the most important assessments in schools are the annual standardized tests.  These high-stakes tests, or more accurately their results, command the attention of politicians and the media, where the results are publicized and scrutinized.  They command an investment of tens of millions of dollars annually as communities across the nation hold schools accountable for student learning.  And frankly, schools (more accurately, teachers) using this system to classify achievement will never satisfy the electorate.  If the results are increasingly positive, some will say the assessment is flawed, i.e., the test is too easy.  Conversely, if the results are poor, then ‘our schools are failing’ becomes the political mantra.

But while these politically important tests are concerned with measuring performance, it is not obvious how their results actually improve learning.  The high-stakes tests therefore pale in their contribution to school success when compared to the assessments teachers develop, administer, and use day to day in the classroom.

In 2003, a group of British researchers (Black et al.) was interested in developing formative assessment practices with teachers.  Interestingly, the authors ‘tried to encourage teachers to steer clear of summative assessment as they developed their formative work, because of the negative influences of summative pressures on formative practice.  The teachers [involved in the Black et al. study] could not accept such advice because their reality was that formative assessment had to work alongside summative assessment.’[1]

Given the current data-driven, testing, and accountability culture of the U.S., it is essential that classroom teachers formatively analyze test results (summative assessments) regularly and systematically with their students.

Considering researcher Royce Sadler’s assertion that ‘the learner has to compare the actual (or current) level of performance[2] with the standard, and engage in appropriate action[3] which leads to some closure of the gap,’[4] it seems reasonable that students should examine the results of their summative tests and reflect on them, learning from their successes as well as their mistakes.

How do teachers formatively analyze summative data with their students?  This question matters because, when you consider how many test results students toss into the trash on their way out the classroom door, it’s easy to conclude that the results, or more specifically the learning the results were intended to measure, are not important.

I’ve recently been working with an online assessment platform called Naiku, which allows students to take their classroom assessments (quizzes, tests, etc.) on almost any device with an internet connection: computer, laptop, tablet, mobile phone, iPod, etc.  The platform is intuitive for students, and the interface (what students see on the screen) is aesthetically pleasing rather than institutional.  Students take the test and get instant results.

But what I appreciate about Naiku is that the developers created a way for students and teachers to formatively analyze and reflect on their learning (results).  For students, this analysis begins as soon as they complete the test.  Students see their results and examine each question reflectively: they see what the correct answer is and then respond to a set of prompts designed to engage them in the metacognitive process.  Students get to think about why they selected the answer they did and enter a response.  So in addition to the quantitative score, students begin to analyze their results and produce feedback that helps both them and the teacher.

Another feature the Naiku developers have created is the ‘detailed item response,’ which allows the teacher to examine each question (item) on the assessment and see how students responded.  This detailed view is far superior to the old Scantron format still used by so many teachers.  Compare the results from my classroom in these two images:

The Scantron results indicate that 60% of students missed item #33, but that’s all the information you get.  Teachers will typically go back, look at the question on the test, and perhaps ask students some questions in class to better understand the result, but the amount of information you can glean from this data is quite limited.  Furthermore, it’s important to remember that with this “fill in the bubble” format students don’t get their results until later, and unless the teacher schedules time to systematically facilitate the analysis of the results, there is very little chance that students will engage in any reflective process on their own.

In contrast to the Scantron data, the Naiku result is rich with assessment detail.  First, you can see the question and the response choices, with the correct answer displayed in green and the incorrect responses in red.  You also see the class’s quantitative results for each response.  And you can see the reflective responses from students who missed the question, e.g., ‘don’t quite understand the concept.’  The developers are working on changing how this is displayed, but this reflective data from the students (self-assessment, a.k.a. formative assessment) is quite telling for this particular question.
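For teachers who can export raw response data from a platform, the item-level breakdown described above can be approximated with a short script.  This is a minimal sketch, not Naiku’s actual API or data format: the input shape (one dictionary of item-number-to-choice per student) and the 50% flag threshold are my own assumptions, chosen for illustration.

```python
from collections import Counter

def item_analysis(responses, answer_key):
    """Tally response choices per item and flag items most students missed."""
    n_students = len(responses)
    report = {}
    for item, correct in answer_key.items():
        # Count how many students chose each option for this item
        choices = Counter(r.get(item) for r in responses)
        pct_correct = round(100 * choices[correct] / n_students)
        report[item] = {
            "distribution": dict(choices),   # how the class split across options
            "pct_correct": pct_correct,
            "flagged": pct_correct < 50,     # a majority missed this item
        }
    return report

# Hypothetical data: three students answering a two-item quiz
students = [
    {33: "C", 34: "A"},
    {33: "C", 34: "A"},
    {33: "B", 34: "D"},
]
key = {33: "B", 34: "A"}
print(item_analysis(students, key))
```

A flagged item is exactly the kind of result worth bringing back to the class for discussion: the distribution shows not just that students missed it, but which distractor pulled them in, which is often the first clue that the item itself, not the students’ understanding, is the problem.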

Seeing this result from my students made me look at the question much more carefully.  As I reflected on how I taught this curriculum (American Government), I was confused about how my students could achieve this result.  We had talked extensively about the ‘Virginia Plan’ in class and had compared the New Jersey and Virginia Plans, the two main ideas debated at the Constitutional Convention.  But on closer examination of the wording of the item and the possible responses, I concluded that the problem isn’t necessarily with my students’ understanding but rather with the item itself.

Without going into a long explanation of the Constitutional Convention or of the problems with this question, let me first say that this is a “book question.”  That is, the item came from the bank of questions the textbook publisher put together to sell.  Sometimes these questions are okay, but many are not.  The language and composition of the items are often flawed, and most often the questions are constructed to test only at the basic knowledge level of understanding.  Items can be constructed so that students must exercise critical thinking and analysis to select the correct response.  But this question has several flaws, the most obvious being that both responses B and C can be argued to be ‘correct.’

All of this information was part of the conversation my students and I had when we systematically reviewed the data together, a.k.a. formative analysis of a summative test.  The process is reflective and metacognitive; their learning is deconstructed and then constructed again.  Naiku is a great tool to facilitate this process; the old Scantron bubble sheets are not.

I’ve sat through several presentations by companies ‘pitching their assessment wares.’  These companies are similar in that they’ve all created online assessment platforms that provide assessment detail, and some offer large item banks with the platform.  What I most appreciate about Naiku, however, is that it is being developed with classroom teachers, and the developers have been responsive to suggestions from teachers and students about how to improve the product and make it a better tool for learning, whereas the other companies are making tools for testing.

There are other tools within Naiku that have the potential to help Professional Learning Communities with their work, and I’ll be exploring these in future articles.


[1] Black, P., et al. (2003) Assessment for Learning: Putting It into Practice. Maidenhead: Open University Press.
[2] Original emphasis.
[3] Original emphasis.
[4] Sadler, D. R. (1989) Formative Assessment and the Design of Instructional Systems. Instructional Science, 18, 119–144.