
Confusion over Formative Assessment

Salem High School Earth Science teacher Wes Lester recently sent me this link to a post on Edutopia about Formative Assessment (AFL). I found it to be an excellent post and worth reading, so I left a comment saying as much. Because I left a comment, I then received an email every time someone else commented. One such comment made me realize that some people out there do not fully understand Formative Assessment, or AFL.

Here was the comment:

Yes, I think formative assessment is important; however, it is not the only measure of a student's success. Unfortunately, we are currently in an environment that places so much emphasis on formative and standardized testing. In my school, it seems as if the formal testing never ends. They are tested in September (a formative), October (SRI), January (formative), March (state test), April (SRI), and finally in May (formative), not to mention the unit tests required by the district. The structure, length, and environment created around these tests are such that students become desensitized. In an effort to help make this over-testing environment tolerable, I must come up with alternative ways of conducting my own assessments.

It has gotten to the point that the students moan when they are told that it's a testing day. Several pupils have even asked why there is so much testing. I candidly explained that testing won't go away and that even when you get older there are yet more tests to come (driver's tests, the SAT, professional tests, etc.). This explanation seemed to make it more palatable. In truth, I feel that these children are tested because of the demographics of the district and past performances. Neighboring counties within the same state don't administer nearly as many assessments.


This person has confused Formative Assessment with an official testing program. It's probably not this teacher's fault, as it sounds as though the school district has bought into a specific benchmark assessment program and called it formative assessment. While benchmark tests and testing programs can be used as formative assessments, effective Formative Assessment is what occurs in a classroom each and every day.

Formative Assessment is graded and it is ungraded. It is formal and it is informal. It is big and it is small. It is ANYTHING that provides the teacher with feedback on how well students are learning, and it is ANYTHING that provides students with feedback so they can guide their learning. It should not lead to students asking "why there is so much testing" or "moan[ing] when they are told that it's a testing day." It should not be "the ... measure of a student's success" but rather an indicator of how they are learning so that they can end up having success.

I'm glad that our school is encouraging teachers to view Formative Assessment as a tool/philosophy that can look different in each and every classroom.

Click here to read the entire post from Edutopia.

A Sports Analogy for Assessment

On page 96 of the book "A Repair Kit for Grading," the author, Ken O'Connor, draws a useful analogy between assessment and the way bands and sports teams distinguish practice from performance:


"It is critical that both teachers and students recognize when assessment is primarily for learning (formative) and when it is primarily of learning (summative). Students understand this in band and in sports, when practice is clearly identified and separate from an actual performance or game."


If we follow this analogy, then the final exam for a unit or course becomes the big game for the sports team. If you are training basketball players, isn't the best way to test their abilities to have them play a game? The coach treats the big game as the final exam, and all of the activities leading up to that game are meant to help the players prepare for it.


The diagnostic assessment is an initial activity that puts students in a simulated game to see what their strengths and weaknesses are. Once those have been identified, the formative assessments are the practice sessions that help students refine specific technical skills, build leadership skills, raise stamina, and work on team building, all necessary for each player to perform at his/her best and for the team to win.


Note that in this case,


• All of the players clearly understand what is expected of them by the time the big game comes around.

• All of them understand what their individual and collective strengths and weaknesses are and are motivated to improve their skills in order to support the team.

• The coach wants the players to do their best and pushes the players to practice hard so they can do so.

• The team knows that the practices don't give them points in the final game; it's the game that counts, not the practices, although the more they practice the better they will play. After the big game, the team evaluates its performance, draws up new strategies to improve, and starts practicing again.


Designing a multi-stage, complex performance task as the final exam allows teachers to identify all of the discrete skills students will need to perform well at the end, so those skills can be practiced in low-stakes situations, tried out in scrimmage games, and practiced again until everybody feels ready for the big game. This movement back and forth between instruction and application, between drilling discrete skills and performing the whole task, is what helps students learn well. It also helps them learn how to learn, a capacity that comes in handy as students take on further personal and academic responsibilities.


Although teachers don't give the same or similar tests more than once the way coaches do, we do teach more complex skills that build on what students had to learn for the previous exam. In this way, the capacities we aim to develop in our students by the end of the semester or year are complex and broad.


This analogy has provided me with a variety of new perspectives on assessment as well as some criteria to evaluate my own assessment strategies. I have become a better teacher by practicing this concept and I hope it gives others some valuable insight too.


How do you really know if you taught "it"?

Note to teachers from Salem High School: This is a post about teaching, teachers, and students in general as opposed to a post about specific situations at Salem High School.

So after all the lesson plans have been created, all the class time has been spent, and all the papers have been graded, how do you really know if you've taught your content well?
I might get under some people's skin with this post, but I want to get us to really think about our profession and WHY we teach.
So what's the answer to the question of how we know if we have taught our content well? If we're really going to live up to our calling, we must answer it this way: We know we have taught our content well if all our students have learned it and their grades reflect this.
Let's clear up one misconception before it has a chance to grow - our job is not to make sure all students get good grades. Our job is, however, to make sure that all students learn our content. That's the whole point of being a teacher - to get students to learn. It's also our responsibility to grade in a way that reflects the amount of learning. So while good grades are not our focus, learning is. And when learning occurs, if we grade properly, good grades will follow.
Ok, let's clear up another misconception before we proceed - saying that it's our job to make sure that all our students have learned does not absolve students of their role in the learning process. Obviously poor decisions by our students will end up impacting the amount of learning that occurs. However, we cannot control their decision making. We can, though, control how we teach and how we grade. Therefore, if what we are doing is not leading to the mastery of content, and if our grades are not accurately reflecting the level of mastery reached by our students, then it is incumbent upon us to do something about it. There is no room in education for complacency. Our attitude must be that IF THEY HAVEN'T LEARNED IT, THEN WE HAVEN'T TAUGHT IT.
I remember taking Macroeconomics in college. Without going into too much detail, suffice it to say that while the professor may have "known his stuff," he was an absolutely lousy teacher. There must have been about 400 students in the class. I was only taking the class Pass/Fail. I really felt bad for my classmates as I looked at the posted grades after each test we took. I remember earning a 60 on the mid-term and having it curved to a B+. I really didn't care since it was Pass/Fail, but I remember thinking what a joke it was to say that this person was teaching. Obviously many of the students - myself included - were not putting the amount of effort into the class that we should have, but how could that professor be satisfied with himself knowing that almost none of his 400 students were mastering the content in his course?
I envisioned this professor sitting with his colleagues in the departmental office complaining about "college students these days". While I wasn't around in his day, I really doubt there ever was a time when college students enjoyed boring lectures, no descriptive feedback, and undecipherable tests. We didn't learn it, and he didn't teach it.
So what is an appropriate level of failure for your students? Should you be satisfied if 70% master your content? 80%? 90%? While it's important to keep a certain level of reality mixed in with your idealism so that you don't go crazy, WE MUST HAVE THE ATTITUDE THAT WE ARE GOING TO STRIVE FOR 100% MASTERY. Notice I said strive. This means we will not be complacent. We will continue to tweak, change, try, experiment, etc. to always try to bring more students to mastery level.
This is where Assessment FOR Learning has its greatest power. To some degree, it saddens me when teachers have difficulty incorporating - or worse, don't try to incorporate - an AFL philosophy into their teaching. The reason is that an AFL philosophy will lead to greater content mastery. To not incorporate AFL strategies into your teaching is to be satisfied with not doing the best that you can to teach your students. Let me give an example of what I mean:
If you "teach" content and then give a summative assessment (a traditional test, for example) without lots of assessment along the way, you know what will happen. The students who are very dedicated workers and/or the students who can sit in class and "get it" will do very well on the test. The students who do little to no work outside of class or who can't just sit in class and "get it" will do very poorly. Another group of students will score somewhere in between. For years, teachers have satisfied themselves with this outcome by "blaming students". In other words, because some students almost always do well, the teacher convinces himself or herself that all students could have done well if they had either worked harder, paid more attention, or were more academically gifted. The teacher "knows" he or she taught the content because SOME students have mastered the content. This is a convenient defense strategy for teachers as it absolves teachers of the responsibility of making sure that students learn. YES, students have a role in it (as stated earlier), but we can't control all of their decisions. We CAN control how we teach, though.

The scenario in the above paragraph is very common in schools. Essentially, it is being satisfied with the bell curve of life. As educators, we have the privilege of smashing the bell curve. We have the opportunity to be the "difference-maker" in a kid's life. Too often this opportunity is squandered as we sell short our ability to alter the outcome of a student's learning. AFL - formative assessment - can be a powerful tool in our attempt to maximize that opportunity. And it really doesn't require much additional work on our part.
Take the example from two paragraphs above. If instead of "teaching" and then giving a summative assessment, the teacher would assess EVERY DAY, then an incredible difference could be made in the typical bell curve outcome. For example:
  • If every day the students left class knowing what they know and being aware of what they have not yet mastered - this happens because of specific classroom assessment activities led by the teacher - then students will perform better on the summative assessment. Have you ever experienced a situation as a student where you thought you knew what was going on until you took the test? You studied, and you thought you understood the content. Then you took the test and realized you didn't know it at all. This is all too common - but it shouldn't be. If this is happening to students in your class, then you need to apply more AFL strategies. This is a clear sign that you need to provide activities that require your students to assess themselves throughout the learning process so that they are acutely aware of how well they're doing and what they need to do to prepare for the summative test.
  • If students were quizzed/tested/assessed repeatedly leading up to the summative assessment, then the summative assessment would not catch them by surprise. Do you ever hear your students complain that they understood the content but were surprised by the types of questions on the summative test? Unfortunately, this is a common occurrence as well. It's a clear sign that a teacher has not employed an AFL philosophy. AFL is about using assessment FOR learning. Teaching and then giving a summative assessment only is AFG - Assessment FOR Grading. It's using assessment to find out how much people know. While this has to happen eventually - there is nothing wrong with a summative assessment - it does little to help the learning process. If students are assessed regularly - DAILY - then the feedback from the assessments will actually help them learn - thus the name, Assessment FOR Learning.
Let's clear up two more misconceptions:
  1. But what about students who still refuse to work? Couldn't they still come into class completely unprepared and fail the assessment? Of course. But they are also the outliers. Let's focus on the majority of students - the ones who do what we ask. Let's not lose a good strategy just because a few students continue to make bad decisions. HOWEVER, I would contend that those poor decision-making students would learn more if they were assessed daily and provided with opportunities to assess themselves - even if they didn't work hard outside of class.
  2. But what about rigor? Shouldn't a rigorous class by its very nature lead to a bell curve of sorts? The rigor of a class should not be demonstrated by the student grades that result; it is inherent in the difficulty of the content. However, assuming that the students who are in the class have been properly prepared and have academic strengths on par with the class, there is no reason that students shouldn't enjoy great success in a rigorous course. Our job as teachers is to get students to learn. That is no less true in a rigorous class than it is in a "general level" course. Unfortunately, it is common for teachers in rigorous classes to feel that the rigor of the course justifies the lack of success of some students. Again - grades aren't the goal. Learning is. But if AFL strategies can lead to students in rigorous courses earning higher grades that reflect increased learning, then how could we not employ those strategies?
So, how do you know if you've taught your content? You know it if your students have learned it. And AFL strategies will help increase that learning - which is, after all, WHY we teach.
