On page 96 of the book "A Repair Kit for Grading", the author (Ken O'Connor) draws a useful
analogy between performance-based assessment and a band or a sports team:
"It is critical that both teachers and students recognize when assessment is primarily for learning (formative) and when it is primarily of learning (summative). Students understand this in band and in sports, when practice is clearly identified and separate from an actual performance or game."
If we follow this analogy, then the final exam for a unit or course becomes the big game for the sports team. If you are training basketball players, isn't the best way to test their abilities to have them play a game? The coach sets up the big game as the final exam, and all of the activities that lead up to that game are meant to help the players prepare for it.
The diagnostic assessment is an initial activity that puts students in a simulated game to reveal their strengths and weaknesses. Once those have been identified, the formative assessments become the practice sessions that help students refine specific technical skills, build leadership skills, raise stamina and work on team building, all necessary for each player to perform at his or her best and for the team to win.
Note that in this case,
• All of the players clearly understand what is expected of them by the time the big game comes around.
• All of them understand what their individual and collective strengths and weaknesses are and are motivated to improve their skills in order to support the team.
• The coach wants the players to do their best and pushes the players to practice hard so they can do so.
• The team knows that the practices don't give them points in the final game; it's the game that counts, not the practices, although the more they practice the better they will play in the game. After the big game, the team evaluates its performance, draws up new strategies to improve and starts practicing again.
Designing a multi-stage, complex performance task as the final exam allows teachers to identify all of the discrete skills students will need to perform well at the end, so those skills can be practiced in low-stakes situations, tried out in scrimmage games and practiced again until everybody feels ready for the big game. This movement back and forth between instruction and application, between drilling discrete skills and performing the whole task, is what helps students learn well. It also helps them learn how to learn, a capacity that comes in handy as students take on further personal and academic responsibilities.
Although we teachers don't give the same or similar tests more than once the way coaches replay games, we do teach more complex skills that build on what students had to learn for the previous exam. In this way, the capacities we aim to develop in our students by the end of the semester or year are complex and broad.
This analogy has provided me with a variety of new perspectives on assessment as well as some criteria to evaluate my own assessment strategies. I have become a better teacher by practicing this concept and I hope it gives others some valuable insight too.
- Homework would still be given but would either not count for points, or all homework assignments would add up to one homework grade of approximately 30 points. Another idea I have contemplated: at the end of the grading period, students with all homework completed would get a reward, perhaps a pizza party, while students with missing assignments would spend that time completing their work.
- Quizzes would still be given almost daily but would now count only 10 or 15 points each. In addition, if a student's test grade were higher than the quizzes that led up to it, I would excuse the quiz grades for that student.
- Tests would count more. In the class I taught the tests were used as the ultimate gauge of mastery learning. The tests would continue to build on themselves but would probably start somewhere around 300 points and build up to around 800 points.
- To build on the point I made above, the quizzes would be excused if the student's test grade were higher. The quizzes would be considered practice grades. Students would be trained not to fret about quizzes but instead to use them as ways to gauge their learning. I might even borrow Beth Moody's GPS idea and allow students to retake an occasional quiz; however, this would probably not be the case for most quizzes, since whenever possible I would be repeating quizzes anyway.
- The goal of quizzes would be to practice for the test. In the past I viewed the quizzes more as grades unto themselves. The problem with this, though, was that if I had four 30-point quizzes before a 100-point test, then the quizzes added up to more than the test. Adding in the four or five 10-point homework assignments got in the way even further. Yes, they were assessments that helped the students learn, but they also had an inappropriate impact on the grade: they could help a student master the content, as evidenced by a high test score, while simultaneously lowering that student's grade. (A worked example of this weighting problem appears after this list.)
- If I were in the classroom today I would add an entirely new element of students assessing themselves. I would want students to take control of their own learning and to know what they do and don't know. I would then want them to use that knowledge to guide their own studies.
- One thing I would do would be to make sure that every day (if possible) the students and I both received feedback. As I prepared my lessons I would ask myself the questions posed in this earlier post.
- When I reviewed with students for tests I would change my method and adopt a strategy similar to this one used by Paola Brinkley and many other teachers in our building. (I would probably find a way to turn it into a game since I love playing games in class.)
- At the beginning of each unit/topic I would give students a rubric like the one in this post. At some point during most class periods I would have the students use the rubric to assess themselves and see how well they are mastering content. They would then use the rubric as a study guide as described in the post.
- I would also have students analyze their grades regularly so that they would know how well they needed to do on a test to reach their grade goal. (Implied in this is the fact that I would first have students regularly set goals.) I would use a strategy similar to this one used by Lewis Armistead. (A short sketch of that calculation also follows this list.)
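To put numbers on the weighting problem described in the quiz bullet above, here is a minimal sketch in Python. Only the point values (four 30-point quizzes, 10-point homework assignments, a 100-point test) come from the example; the individual scores are hypothetical, and the "excuse the quizzes when the test grade is higher" rule is the one proposed in the earlier bullets.

```python
# Hypothetical scores under the point values from the example above:
# four 30-point quizzes, four 10-point homework assignments, one 100-point test.
quiz_scores = [18, 20, 22, 24]        # out of 30 each -> 120 possible
homework_scores = [10, 10, 9, 10]     # out of 10 each -> 40 possible
test_score = 92                       # out of 100

quiz_possible = 4 * 30
homework_possible = 4 * 10
test_possible = 100

# Straight points total: quizzes plus homework (160 points) outweigh the test (100),
# so a student who has clearly mastered the content by test time still ends up lower.
earned = sum(quiz_scores) + sum(homework_scores) + test_score
possible = quiz_possible + homework_possible + test_possible
print(f"All points counted: {earned}/{possible} = {earned / possible:.0%}")   # 215/260 = 83%

# "Quizzes as practice": if the test percentage beats the quiz percentage,
# excuse the quizzes so the grade reflects the learning shown on the test.
quiz_pct = sum(quiz_scores) / quiz_possible
test_pct = test_score / test_possible
if test_pct > quiz_pct:
    earned = sum(homework_scores) + test_score
    possible = homework_possible + test_possible
print(f"Quizzes excused:    {earned}/{possible} = {earned / possible:.0%}")   # 131/140 = 94%
```

With these made-up numbers the same student moves from 83% to 94%, which is the point of treating quizzes as practice rather than as grades unto themselves.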
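For the grade-analysis idea, here is a similarly minimal sketch of the arithmetic a student might do, assuming a simple points-based gradebook; the running totals and the 90% goal below are hypothetical.

```python
# Hypothetical gradebook numbers: points earned so far, points possible so far,
# the upcoming test's point value, and the grade the student is aiming for.
earned_so_far = 410
possible_so_far = 460
test_possible = 100
goal = 0.90  # aiming for a 90% grade

# Points needed on the test so that
# (earned_so_far + needed) / (possible_so_far + test_possible) >= goal
needed = goal * (possible_so_far + test_possible) - earned_so_far

if needed <= 0:
    print("The goal is already met, even with a zero on the test.")
elif needed > test_possible:
    print("The goal cannot be reached on this test alone.")
else:
    print(f"Score at least {needed:.0f}/{test_possible} "
          f"({needed / test_possible:.0%}) on the test to reach a {goal:.0%} grade.")
```

Here the student would need 94/100 on the test, which makes the goal concrete and shows exactly how much the practice before the test matters.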
Lee Hodges (World Geography - ALMS) and I created a cross-curricular activity two years ago. The activity was modeled on the TV show "The Amazing Race."
The "Race"
- Groups of students are given a world map and an answer sheet. Each group then receives their first clue. On this sheet there is a World Geography AND a Math question related to a place in the world (for example, Big Ben) that they have to answer. After they record their answers, they get each one checked, the Math by me and the World Geography by Mr. Hodges. They show us where the place is located on the map and then they get their next clue. The first group to answer all 10 questions (5 math, 5 world geography) wins!
AFL:
- We do not give this activity a grade. We use the information that the students provide as a basis for reteaching and new learning. If the students come to us with a wrong answer they must go back and either look the answer up or try again. It's a great way for us to determine what needs to be reviewed for the SOL test. The best part is that the students don't even realize that we are "assessing" what they know. They just think it's a game!!!
If you are interested in obtaining a copy of this activity, please send me a message!
Salem High School teachers on this Ning know the answer to that. When our school first started taking a serious look at AFL, we realized right away that how you choose to grade assessments can negate the learning they generate. In other words, if you use AFL strategies well, they will lead to an increase in learning. Students and teachers will be using feedback to guide learning and instruction. However, if we want a student's grade to reflect the learning that occurred, we must be very careful and deliberate about how we grade (or don't grade) the assessments we give. Allen Iverson - believe it or not - has something to say about that. Watch the video and then I'll explain.
It's been awhile since I've seen that video. Could someone refresh my memory about what he was "talkin' 'bout"? Oh, that's right - PRACTICE!
First of all, my posting this video is not in ANY WAY making a point about the need to practice when you're on a team. I'm not AT ALL an Iverson fan. It's just posted because it gives us an image to which we can relate - We're Talkin' 'Bout Practice!