What a privilege it is to be able to observe great educators practicing their craft!

Recently I had a chance to be in the classroom of Michelle Kovac, Salem High School's Marketing teacher. She was teaching Advanced Marketing. Two things stood out to me.

1. Mrs. Kovac did an excellent job of weaving AFL strategies and techniques into her classroom.

2. The strategies employed by Mrs. Kovac were highly successful IN PART due to the strategies themselves but MAINLY (in my opinion) due to the enthusiastic manner with which she employed them.

Let's start with the second thing I noticed - enthusiasm. In my interactions with teachers at various schools over the years, I have often heard teachers bemoan the fact that their attempts at creative or new strategies have been unsuccessful because of the weak level of their students. I would be overly "Pollyanna-ish" if I said that students had no bearing on a teacher's ability to be effective. However, what I have noticed more often is that strong students mask poor teaching much more frequently than weak students destroy great teaching.

Mrs. Kovac's Advanced Marketing class was an example of this situation. Advanced Marketing students are a diverse group. Some of them have been excellent students over the years. Some have struggled greatly. Some have had no disciplinary issues while others have had quite a few. Here's what they have in common, though. They are seniors in the spring - a time when seniors can be difficult to motivate.

I was amazed at what I saw in class that day. Mrs. Kovac's enthusiasm for the content was absolutely infectious. She acted as though Marketing was the coolest thing in the world, and as I sat in her class I began to agree! She was a cheerleader, an entertainer, and a motivator - and the kids appreciated it. It was obvious that this was who she was in class on a daily basis because the kids thought it totally normal. Try faking enthusiasm on an occasional basis and students will see right through you.

The atmosphere in Mrs. Kovac's class was almost the way I envision an elementary classroom. What I mean is that these kids - these seniors - were excited to be there. They laughed. They joined in. When it was time to start working on projects they actually got up and RAN to get their supplies. One kid begged Mrs. Kovac to let her correct her quiz from the day before - not for points, not for a higher grade, just to be able to be correct. Mrs. Kovac finally "relented" and gave the student "permission" to correct her quiz!

When one student asked a particular question Mrs. Kovac said, "I feel a song coming on!" The entire class broke into a song about marketing. Seniors in high school willingly singing a song about Marketing in class - wow! That's what enthusiasm can do. It's what Parker Palmer describes in his book, The Courage to Teach. A teacher can lift up a class with his or her enthusiasm if the teacher has the courage to step out from behind the wall of safety that educators often erect. The courage that Mrs. Kovac showed to be herself, to be enthusiastic, and to share her love of her content is what made the assessment strategies she used work so well.

Here are the strong assessment strategies used that day by Mrs. Kovac:

Do Now Assignment - Predict Your Score
On the smart board were the numbers 3, 7, and 5. There were also 3 statements: "Guessed Correctly", "Guessed Wrong - Scored Higher", and "Guessed Wrong - Scored Lower". Students had to match a number with a statement. The day before students had taken a quiz and had predicted what their grade would be based on how well they had prepared for the quiz. For this day's Do Now assignment students had to match the numbers with the correct phrase. In other words they were trying to figure out that 3 students had correctly predicted their grades, 7 students had guessed wrong but scored higher, and 5 students had guessed wrong and scored lower.

So what are the assessment strengths here? Mrs. Kovac was training her students to analyze their preparation, which in turn should help them understand the role that preparation plays in a student's success. This sort of feedback will hopefully encourage students to prepare more effectively in the future. Going back and analyzing how accurate their predictions were should help this knowledge sink in even more. It also gave Mrs. Kovac an opportunity to build them up by (enthusiastically) pointing out that they tended to underestimate themselves.

Why Did You Miss What You Missed?
When Mrs. Kovac handed back the students' quizzes she asked them to go over them and write down next to each question they missed why they missed it and what messed them up. She was not going to go over the quizzes with them that day. Instead, she told them that she first wanted to collect their feedback on why they missed what they missed. She told them that this feedback could alter how she goes over the quiz with them. She wanted it to be a learning experience rather than simply a listing of correct answers. When she went over the quiz with them the next day she wanted to be able to reteach/explain to them what they NEEDED to hear so they wouldn't miss the question next time around. This was a great example of a teacher collecting assessment data to guide instruction. She also told the students that she wanted them to get feedback for themselves so that they could ask appropriate questions. (By the way, this was when the one student begged to be able to correct her quiz.)

Analyzing the Competency List
Marketing classes teach based on a Marketing competency list the same way other courses might teach specific state or national standards. Mrs. Kovac had her students pull out their competency lists. The fact that they all had them and quickly pulled them out spoke volumes! Then they went through the competencies that they had recently covered, and each student rated each of those competencies on a scale of 1-5 based on how well the student understood the specific competency. These students were fully involved in analyzing their own progress. Their competency list was becoming a study guide for the end of the year and a way for them to take ownership of their studies. Mrs. Kovac's students obviously did this sort of activity regularly because they were very familiar with the competency list. One of them even pointed out that she had forgotten to mention 2 of the competencies they had covered. Another kid excitedly pointed out that they were almost done with the list. When Mrs. Kovac (enthusiastically) asked, "Doesn't it feel good?" a chorus of students answered, "Yes!"

Mrs. Kovac's classroom is a good example of small ways to use AFL strategies to give students ownership of their own progress. Would those strategies work in any classroom? Yes - but they will work BEST when coupled with genuine enthusiasm.


A Sports Analogy for Assessment

On page 96 of his book "A Repair Kit for Grading", Ken O'Connor draws a useful analogy between performance-based assessment and a band or a sports team:

"It is critical that both teachers and students recognize when assessment is primarily for learning (formative) and when it is primarily of learning (summative). Students understand this in band and in sports, when practice is clearly identified and separate from an actual performance or game."

If we follow this analogy, then the final exam for a unit or course becomes the big game for the sports team. If you are training basketball players, don't you think that the best way to test their abilities is to have them play a game? The coach sets out the big game as the final exam, and all of the activities leading up to that game are meant to help the players prepare for it.

The diagnostic assessment is an initial activity that puts students in a simulated game to see what their strengths and weaknesses are. Once they have been identified, the formative assessments are the practice sessions that help students refine specific technical skills, build leadership skills, raise stamina and work on team building, all necessary for each player to perform at his/her best and for the team to win.

Note that in this case,

• All of the players clearly understand what is expected of them by the time the big game comes.

• All of them understand what their individual and collective strengths and weaknesses are and are motivated to improve their skills in order to support the team.

• The coach wants the players to do their best and pushes the players to practice hard so they can do so.

• The team knows that the practices don't earn them points in the final game; it's the game that counts, not the practices, although the more they practice the better they will play in the game. After the big game, the team evaluates its performance, draws up new strategies to improve, and starts practicing again.

Designing a multi-stage, complex performance task as the final exam allows teachers to identify all of the discrete skills students will need to perform well at the end, so those skills can be practiced in low-stakes situations, tried out in scrimmage games, and practiced again until everybody feels ready for the big game. This movement back and forth between instruction and application, between drilling discrete skills and performing the whole task, is what helps students learn well. It also helps them learn how to learn, a capacity that comes in handy as students take on further personal and academic responsibilities.

Although teachers don't repeat the same or similar tests the way coaches replay games, we do teach more complex skills that build on what students had to learn for the previous exam. In this way the capacities we aim to develop in our students by the end of the semester or year are complex and broad.

This analogy has provided me with a variety of new perspectives on assessment as well as some criteria to evaluate my own assessment strategies. I have become a better teacher by practicing this concept and I hope it gives others some valuable insight too.


Assessment for Learning/Grading

Interesting ideas. There are those who would say that if an assessment is graded (which is not the same as scored, as with rubrics), it is probably not used by the student for learning as well as it could be.
Research from Wiliam would support this idea.
I have always maintained that this is more a function of the culture in the classroom.
One thing is for certain: if instructors use more assessment that is not graded, they will eventually get more buy-in from students that assessment's major function is continuous improvement.
I would love to hear other points of view.
When faced with a new concept it is natural and necessary to attach meaning to that concept. Sometimes when we find an understandable example of that concept we begin to mistake that example for the concept itself. As Salem High School and the City of Salem Schools strive to master the concepts of Assessment FOR Learning, it is understandable that this will happen to some degree.

For example, earlier in the year we at SHS discussed a strategy of having a final test grade or portions of a final test grade replace the quiz grades that led up to that test. (Read about that here.) This method made the quizzes into practice assignments that prepared the student for the test. I began to receive some feedback from people saying that AFL wouldn't apply to their classes because this strategy for whatever reason did not fit into their classroom or teaching style. While this was a good example of AFL, it was just an example. AFL is bigger than any one practice, which led to this post on that topic.

Similar questions have arisen over time in regard to various other procedures that have been held up as examples of AFL. My post on philosophy v. procedures attempted to deal with the fact that AFL is much bigger than any one procedure.

Recently I have received feedback that shows that the practice of allowing students to retake tests and quizzes is being seen as the crux of AFL. While I have heard from many teachers who have used retakes as a way to allow students to learn from feedback, as was the case with tests replacing quizzes, AFL is bigger than retakes.

To help illustrate this I thought it might be useful to describe how AFL might have impacted my own classroom - if I hadn't left the classroom 6 years ago for the dark side of the force (administration)! :)

In my 9th Grade World History classroom my assessments and my grading were very closely related. While many of my graded assessments were AFL-ish (although I had never heard of AFL back then) I realize that I did not do enough assessing solely for the purpose of learning rather than grading. Here's how I assessed/graded:

1. Almost Daily Homework Assignments - 10 pts/assignment
Each assignment directly prepared students for the quiz the next day.

2. Almost Daily Quizzes - 30 pts/quiz
Often the same quiz was given several days in a row so that students could master the content.

3. Almost Weekly Tests - Range of 100 pts/test to 500 pts/test
Tests would build on themselves. A 100 pt test might cover Topic A. A 200 pt test might cover Topics A and B. A 300 pt test might cover Topics A, B, and C, and so on... By the time we got to the larger tests the students tended to have mastered the content because they had been quizzed and tested on it over and over - not to mention what we had done in class with notes, activities, videos, debates, etc.

So what would I do differently now that I have spent so much time grappling with AFL? Here are the changes:

1. Change in point values:
  • Homework would still be given but would either not count for points or would add up to one homework grade of approximately 30 points. Another idea I have contemplated is that at the end of the grading period students with all homework completed would get a reward, perhaps a pizza party, while students with missing assignments would spend that time completing their work.
  • Quizzes would still be given almost daily but would now only count 10 or 15 points each. In addition, if a student's test grade was higher than the quizzes that led up to it I would excuse the quiz grades for that student.
  • Tests would count more. In the class I taught the tests were used as the ultimate gauge of mastery learning. The tests would continue to build on themselves but would probably start somewhere around 300 points and build up to around 800 points.
2. Change to How Quizzes are Viewed:
  • To build on the point I made above, the quizzes would be excused if the student's test grade was higher. The quizzes would be considered practice grades. Students would be trained to not fret about quizzes but to instead use them as ways to gauge their learning. I might even borrow Beth Moody's GPS idea occasionally and allow students to retake an occasional quiz; however, this would probably not be the case for most quizzes since whenever possible I would be repeating quizzes anyway.
  • The goal of quizzes would be to practice for the test. In the past I viewed the quizzes more as grades unto themselves. The problem with this, though, was that if I had four 30 pt quizzes before a 100 pt test, then the quizzes added up to more than the test. Adding in the four or five 10 point homework assignments further got in the way. Yes, they were assessments that helped the students learn, but they also had an inappropriate impact on the grade. They could help the student master the content as evidenced by the high test score while simultaneously lowering the student's grade.
3. Students Assessing Their Own Progress:
  • If I were in the classroom today I would add an entirely new element of students assessing themselves. I would want students to take control of their own learning and to know what they do and don't know. I would then want them to use that knowledge to guide their own studies.
  • One thing I would do would be to make sure that every day (if possible) the students and I would both receive feedback. As I prepared my lessons I would ask myself the questions posed in this earlier post.
  • When I reviewed with students for tests I would change my method and adopt a strategy similar to this one used by Paola Brinkley and many other teachers in our building. (I would probably find a way to turn it into a game since I love playing games in class.)
  • At the beginning of each unit/topic I would give students a rubric like the one in this post. At some point during most class periods I would have the students use the rubric to assess themselves and see how well they are mastering content. They would then use the rubric as a study guide as described in the post.
  • I would also have students analyze their grades regularly so that they would know how well they needed to do on a test to reach their grade goal. (Implied in this is the fact that I first would have students regularly set goals.) I would use a strategy similar to this one used by Lewis Armistead.
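To make the point-value changes above concrete, here is a small sketch of how a final grade might be computed under this scheme. The function and all of the numbers are hypothetical illustrations (a made-up student, not anything from my actual grade book): quizzes are excused whenever the overall test percentage is higher, and the escalating tests carry the weight.

```python
# A sketch of the revised grading scheme: homework carries no points,
# a quiz is excused when the student's overall test percentage beats it,
# and tests escalate in point value (e.g., 300 -> 500 -> 800).

def final_grade(quiz_scores, quiz_max, test_scores, test_maxes):
    """Return the grade (as a fraction) under the excused-quiz rule."""
    test_pct = sum(test_scores) / sum(test_maxes)
    earned, possible = sum(test_scores), sum(test_maxes)
    for q in quiz_scores:
        if q / quiz_max < test_pct:
            continue  # quiz excused: the later tests showed mastery
        earned += q
        possible += quiz_max
    return earned / possible

# Hypothetical student: weak early quizzes, strong escalating tests.
quizzes = [4, 6, 9, 10]            # out of 10 points each
tests = [240, 450, 760]            # out of 300, 500, and 800 points
print(round(final_grade(quizzes, 10, tests, [300, 500, 800]), 3))  # 0.907
```

Notice that the three low quizzes are excused and only the perfect one counts, so the practice no longer drags down a grade that the tests show was earned.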

Notice that my new plan for my classroom doesn't look incredibly different from my old one. I am assessing daily - which I was already doing - but I have changed my view on grading - it's no longer primary as it once was. Assessing is now different from and more important than grading. I have added more opportunities for students to assess themselves.

Notice that retaking tests was not a part of my AFL plan. Students are already taking multiple tests on the same content. Those tests build in point value, so mastery by the end outweighs performance at the beginning. You are also being quizzed regularly and regularly assessing yourself. There really isn't a need for retaking the test. (Please realize that this does not mean that retaking tests should be frowned upon. It simply isn't the only way to use AFL.)

So does this mean that the plan I have outlined is how AFL should be done? NO NO NO NO NO! It's how AFL could be done. It is guided by AFL philosophies and ideas, but those same ideas could lead to very different procedures in other classrooms and with other content. AFL is big enough to go beyond certain practices and instead guide all good instructional practices.

Any thoughts?

Lee Hodges (World Geography - ALMS) and I created a cross-curricular activity two years ago. This activity was similar to the show "Amazing Race."

The "Race"

-Groups of students are given a World Map and an answer sheet. Each group then receives their first clue. On this sheet there is a World Geography AND a Math question, each related to a place in the world (for example, Big Ben), that they have to answer. After they record their answers, they get each of them checked - the Math by me and the World Geography by Mr. Hodges. They show us where the place is located on the map, and then they get their next clue. The first group to answer all 10 questions (5 math, 5 world geography) wins!


-We do not give this activity a grade. We use the information that the students provide as a basis for reteaching and new learning. If the students come to us with a wrong answer they must go back and either look the answer up or try again. It's a great way for us to determine what needs to be reviewed for the SOL test. The best part is...the students don't even realize that we are "assessing" what they know. They just think it's a game!!!

If you are interested in obtaining a copy of this activity, please send me a message!

Members of this network may have noticed a video that seems out of place on an educational social network. The video is of a post-game interview with NBA player Allen Iverson. Why in the world is that on here?

Salem High School teachers on this Ning know the answer to that. When our school first started taking a serious look at AFL, we realized right away that how you chose to grade assessments could negate the learning that they generated. In other words, if you use AFL strategies well they will lead to an increase in learning. Students and teachers will be using feedback to guide learning and instruction. However, if we want the student's grade to reflect the learning that occurred, we must be very careful and deliberate about how we grade (or don't grade) the assessments we give. Allen Iverson - believe it or not - has something to say about that. Watch the video and then I'll explain.

(If the video on this post didn't load right away, try reloading the page.)

It's been awhile since I've seen that video. Could someone refresh my memory about what he was "talkin' 'bout"? Oh, that's right - PRACTICE!

First of all, my posting this video is not in ANY WAY making a point about the need to practice when you're on a team. I'm not AT ALL an Iverson fan. It's just posted because it gives us an image to which we can relate - We're Talkin' 'Bout Practice!

How does this relate to grading? Think about your grades and your assessments. How many of them are "practice"? In other words, how many of your assignments are intended to help students practice so that they can learn? I bet you that most of them are. Now let's think about grading. How many points do you assign to these assignments? What would happen to a student who mastered the content, as evidenced by your final graded assessment, but did poorly on the practice assignments?

Let's get more direct: How many students are failing your class because they either didn't do or did poorly on your practice assessments? Do you have students who can pass your tests - or whatever your final graded assessment is - but fail your class? Why is this? It's because their practice assignments - the ones that were supposed to help them learn - are counting against them. Never mind that they mastered the material - or at least learned it to a level above failing. Never mind that you taught them even though they didn't do all your assignments. Their practice is causing them to fail.

By the way - I'm not saying here that practice isn't important. I think students should practice every day in class and every night at home. But should practice be graded in a way that allows a kid who learned the content to fail the class or receive a grade that does not represent learning?

The Winter Olympics just ended. Some gold medals were won by less than a tenth of a second. What if the practice runs were then averaged in, causing the gold medal winner to get a silver? That would be ridiculous. Our goal is to get kids to be able to learn and perform. If they do this then it's because of the job we did. Why would we then take a bunch of practice assessments and average them in with the assessments that really counted?
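To put hypothetical numbers on that Olympics illustration (these times are invented purely for the sake of the example), here is a quick sketch of how averaging practice runs into the result can flip the medal standings:

```python
# Invented times, in seconds, for two hypothetical racers:
# two practice runs each, then the medal run that is supposed to count.
gold_practice, gold_final = [52.0, 51.5], 49.90
silver_practice, silver_final = [50.2, 50.3], 49.98

# Judged on the final run alone, "gold" wins by 0.08 seconds.
print(gold_final < silver_final)            # True

# Averaging the practice runs into the result reverses the outcome.
gold_avg = sum(gold_practice + [gold_final]) / 3
silver_avg = sum(silver_practice + [silver_final]) / 3
print(gold_avg < silver_avg)                # False
```

The skier who performed best when it counted loses the gold on paper - which is exactly what happens to a student whose practice assignments are averaged against a final assessment that showed mastery.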

If we use AFL to increase learning but then grade poorly, we can end up negating the achievement. Take a look at your grade book. Examine why some students are failing. Remember - WE'RE TALKIN' 'BOUT PRACTICE!
