Monday, June 25, 2012

Forensic Assessment (A Formative Model)


In explaining my approach to assessment to my students, I often use a forensic, law-court analogy (Wright, 2009).  The students, I explain, are attorneys, presenting evidence of skills or content mastery on their own behalf.  I am the judge who considers the evidence and makes the ultimate determination.
            The course I’ve taught for the last seven years is divided into thirteen units, each corresponding to a single performance standard or to two or more closely related standards.  Over the course of the academic year, a student receives thirteen grades, one for each unit (the grading system is discussed below).  Each unit consists of a number of assessments, including reading comprehension exercises, performance tasks, primary source analyses, and traditional paper-and-pencil tests.  Rather than being averaged into a single grade, the various assessments are treated as diagnostic in nature.  They help “make a case” for a student’s level of content mastery.  If a student hasn’t demonstrated, through their evidence, that they have attained mastery, formal and informal conferences are conducted to explain how the evidence is deficient, to determine what must be done to correct it, and to develop a plan for providing additional evidence.  While the class might need to progress to the next unit, a student who has yet to attain “content mastery” of a particular unit is permitted, and expected, to revisit that unit until mastery is attained.
            While the forensic assessment approach is far from perfect, it permits students to learn at an individualized pace and to demonstrate content mastery at various levels across the cognitive taxonomy, and it provides a more holistic description of a student’s level of content mastery than a single summative assessment might.  It allows academically gifted students to move at an accelerated pace.  It also heeds Stiggins’ (2008) warning to avoid the trap of giving isolated consideration to aptitude, effort, compliance, and attitude.  Students are assessed only in terms of content mastery.  While it is acknowledged that aptitude, effort, compliance, and attitude may influence student progress, it is the responsibility of the student to parlay those factors into content mastery.
            Another benefit of the forensic approach to assessment is that a student is permitted to practice, explore, and even learn from mistakes without fear of penalty.  Under grading systems that confuse summative and formative assessments, a student is often held permanently accountable for mistakes – e.g., misbehavior, confusion, less than adequate work – even though that student might ultimately demonstrate content mastery.
            For many educators, grades are primarily tools of reward and punishment rather than feedback (Guskey & Bailey, 2001).  Without doubt, grades can serve as significant motivators or de-motivators.  Cumulative grade reporting, however, fails to adequately communicate content mastery.  A student, for example, might earn a 50% on a performance task.  In order to improve that grade, the student might then be allowed to revisit the content, repeat the performance task, and receive a new grade computed by averaging the two attempts.  If the student finally attains content mastery and earns a 100% on the second performance task, the cumulative grade would be 75%.  While passing, a 75% doesn’t communicate content mastery.  Teachers often cite reasons – e.g., fairness to other students, teaching responsibility, etc. – for using this manner of assessment, as though assessment were for teaching “life lessons” rather than for guiding instruction and learning.  In formative assessment, however, grades should communicate progress toward content mastery (Stiggins, 2008).  Life lessons only obscure this communication, and don’t necessarily reflect real life anyway.  An individual taking a driving test, for instance, must demonstrate skills mastery in order to receive a driver’s license.  If they fail the test, they must revisit the content until they have attained mastery.  Once they have done so, regardless of the number of times they’ve tested, they receive the same driver’s license that all other drivers receive (not some devalued, averaged, or otherwise qualified license).
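To make the arithmetic concrete, here is a small illustrative sketch of the two grading policies.  The 50% and 100% scores are just the example figures from the paragraph above, and the function names are hypothetical labels of my own, not part of any grading software:

```python
def averaged_grade(scores):
    """Cumulative averaging: every attempt counts toward the grade forever."""
    return sum(scores) / len(scores)

def mastery_grade(scores):
    """Formative reporting: the grade reflects the most recent evidence."""
    return scores[-1]

# First attempt: 50%.  After revisiting the content, a retake: 100%.
attempts = [50, 100]

print(averaged_grade(attempts))  # 75.0 -- "passing," but it hides the attained mastery
print(mastery_grade(attempts))   # 100  -- communicates the student's current mastery
```

The averaged report punishes the early attempt indefinitely; the mastery report, like the driver’s-license example, records only whether mastery was eventually attained.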
            In my classroom, if a student’s collective body of evidence for a particular unit does not reflect content mastery, their unit grade reflects that.  Once they achieve mastery, their grade is adjusted accordingly, regardless of how many attempts were required and without reference to past mistakes, confusion, or less than adequate work.

References

Guskey, T., & Bailey, J. (2001). Developing grading and reporting systems for student learning. Thousand Oaks, CA: Corwin Press.

Stiggins, R. (2008). An introduction to student-involved assessment for learning (5th ed.). Upper Saddle River, NJ: Pearson.

Wright, N.T. (2009). Justification. Downers Grove, IL: InterVarsity Press.


Thursday, June 21, 2012

Rethinking Bloom's Taxonomy

Disclaimer: The following comments are made in order to provoke discussion.  When it comes to the human mind, we are discovering new things each and every day.  Definitive statements about how the brain works and how learning takes place must therefore be taken with a grain of salt.  I am in no way dismissing Bloom's Taxonomy.  I am simply encouraging teachers, particularly teachers of the gifted who feel that instruction isn't quality instruction unless it involves "higher order thinking skills" (or, as educators often say, HOTS, referring to the analysis, synthesis, and evaluation levels of the taxonomy), to take a more critical look at the taxonomy.

I think that we are all on board with saying that Bloom's Taxonomy has value.  The problem, however, is in the degree to which his cognitive domain has been obsessed over and accepted, uncritically, to the point that it has become a quasi-religious creed in education.  I, by the way, use Bloom when designing performance tasks and assessments.  His adjectives for describing different levels of complexity in thinking are extremely handy.  Here are a few points of qualification I would make, though:

1. Bloom has been, since the 1950s, challenged by a number of scholars.  While it is extremely difficult (I would say scandalous) for anyone to claim proven-ness when it comes to cognition (new research debunks old assumptions on nearly a daily basis), the various challenges to Bloom, particularly Moore (1982) and Bereiter & Scardamalia (1998), demonstrate that there are, at least, a number of ways to "cut the cake."

2. Bloom's Taxonomy has seduced many into believing that the various levels of the domain are skills that, once mastered, can simply be transferred to various content domains (i.e., higher order thinking skills).  Learners, in fact, have varying levels of ability at different tiers within the taxonomy.  Returning to the subject of critical thinking: depending on background knowledge and an individual's interests and talents, the same student may be able to think at the highest levels of the taxonomy in a preferred subject but may not be able to think at even the lowest levels in another subject.  Bobby Fischer could engage in higher order thinking in chess, but in most other ways, he was incompetent and eccentric (not that these two always go together).  I was always (and still am) stronger in history, sociology, political science, etc., than I am in mathematics.  I have some ability to evaluate, synthesize, and analyze in social studies, but practically no ability to do so in math.  This is because critical thinking has to be critical thinking about something.  While critical thinking might be a "broad purpose skill," it is not readily and broadly transferable from domain to domain, and it does not exist divorced from subject-specific content: "The only thing that transforms reading skill and critical thinking skill into general all-purpose abilities is a person's possession of general, all-purpose knowledge" (Hirsch, 2006, p. 12).

3. Thinking at the "lower levels" of the taxonomy is not "entry level" thinking, and thinking at the "higher levels" is not "expert level" thinking.  The levels of thinking are, rather, interdependent.  We often describe "knowledge" and "comprehension" as "rote" knowledge.  I would argue that "rote" memorization is not really knowledge at all.  It is, rather, the development of reflex.  We have confused, therefore, the cognitive domain with the psychomotor domain.  I prefer to think of increases in level as a "completing" or "rounding out" of knowledge.

What are the implications of this for classroom instruction?  Well, we will try to continue to work that out in ongoing posts...

References

Bereiter, C., & Scardamalia, M. (1998). Beyond Bloom's Taxonomy: Rethinking knowledge for the knowledge age. In A. Hargreaves, A. Lieberman, M. Fullan, & D. Hopkins (Eds.), International handbook of educational change. Boston, MA: Kluwer Academic.

Hirsch, E.D. (2006). The knowledge deficit: Closing the shocking education gap for American children. Boston, MA: Houghton Mifflin.

Moore, D.S. (1982). Reconsidering Bloom's Taxonomy of educational objectives, cognitive domain.  Educational Theory, 32(1), 29-34.