Here is a conversation you will probably never hear:

Sea World Trainer 1: "I am so tired of these seals. They always want a fish every time they do anything!"
Sea World Trainer 2: "Tell me about it. It's like they don't understand how important the show is. They only care about getting fish!"

The other day I was talking with Jamie Garst, a Chemistry/IB Biology teacher at Salem High School. He mentioned that he recently decided to use Smart Pals (a plastic sleeve that allows an ordinary piece of paper to be used like a small dry erase board) as a way to review in his classroom. (See previous post on using white boards to review.) This was his first experience doing this with his students. As he was instructing them on what to do, he told them that they would also need a blank sheet of paper. Before he could explain why, the kids said, "We know - it's to keep track of what we don't know." Since Jamie had never done this with these students, their knowledge of what to do is evidence that someone had trained them. It's not natural for students to get out paper to assess their understanding. These kids had been trained by another teacher or teachers in the school.
As educators, what do we want students to do?

We want them to learn for the sake of learning.
We want them to work hard because it's the right thing to do and because it leads to learning.
We want them to be internally motivated to do their best.
We want them to care more about learning than they do grades.

I think you'd be hard pressed to find a teacher who wouldn't agree that he or she wants those previous statements to be true for his or her students. However, we train them quite differently.

We train students to learn for the sake of getting a grade.
We train them to work hard because otherwise they'll get a bad grade and because it leads to good grades.
We train them to be externally motivated by grades.
We train them to care more about grades than learning.

Think about it for a moment. The typical classroom at any grade level is not all that different from the seal show at Sea World. The student does the work; he gets a grade or points. The seal does the trick; he gets a fish. The student doesn't do the work; he doesn't get the grade or the points. The seal doesn't do the trick; he doesn't get the fish.

Have you ever assigned something and had students say, "Is this graded?" Have you ever felt like your students wouldn't work as hard if they weren't getting a grade? Have students ever complained that you weren't grading them after they put effort into an assignment or activity? Does it ever seem like all the students (and parents) care about is the grade on the report card or transcript?

Look back at the start of this post. Wouldn't it be ridiculous for the Sea World trainer to complain about the seal always wanting a fish for the tricks it does? Why is it not just as ridiculous for an educator to complain about a student always wanting to know if something is graded or about a student being motivated by grades rather than learning?

Perhaps the answer is because unlike the seal, the student is capable of rational and logical thought processes and should, therefore, know better. However, think about how students have been conditioned from day 1 in school. Do the work - get a reward. Now consider that this has been the case for generations. Is it any wonder that our students tend to be more externally than internally motivated? Is it any wonder that they tend to focus so much on grades and lose sight of the bigger picture of learning?

So what can be done about this? Is it possible to change years of conditioning to get to what we really want from students? Of course, if all teachers in the educational system made a change then we could definitely alter the situation; however, that's probably (definitely) a bit of a stretch. So can students be trained to be more internally motivated and to look at grades differently?

The story of Jamie and his students tells me that the answer is "yes". From my experience, the typical student expectation of a review activity is that the teacher will tell the student everything he or she needs to know - or ask all the questions he or she will eventually be asked - and then the student goes home and studies everything that will be on the test. (Or in some cases, doesn't study at all.) However, what Jamie found out was that his students were being conditioned to expect something different. They now expected that when a review was finished each student would leave class with a personalized list of what that student had not yet mastered. This personalized list would become the student's unique study guide. What Jamie experienced is an example of the fact that student expectations can be changed.

So what if teachers in your building stopped practicing AFG? AFG is Assessment FOR Grading. AFG is what I did very intentionally as a new teacher. I assigned lots of graded assignments so that I could have lots of grades in my grade book. The main purpose of my assignments and my assessments was to get grades in the grade book which could then be averaged together to get a final summative grade. I used points as rewards and withheld points as a consequence. This use of AFG would naturally lead to my students thinking that everything they did had to be graded. I was training my seals - I mean, students - to work hard for the fish - I mean, grade.

AFL is so different. AFL is about assessing and assigning to gain feedback. It's about teachers and students using that feedback to guide learning. The whole point of the assessments and assignments is learning - thus the name, Assessment FOR Learning. This site is full of resources and ideas for applying AFL principles to the classroom.

I think that we can train kids to think differently about grades. It will take effort and a lot of change on our part. It will take great consistency, but it can be done. Until we truly begin applying AFL principles with this goal in mind, does it make sense for us to complain that students react exactly as we have trained them to react?

The best part of this is that if we alter their view of grades, we will ultimately increase their level of learning.

A recent post on The Assessment Network titled Redos and Retakes? Sure. But don't forget to Loop! received a lot of attention via social media and led to quite a few productive discussions.  Without repeating all that was already shared in that post, the basic premise was this:

If we care about learning more than grading and if we want to communicate that to students, then we will need to understand that:

The power of assessment is greatly enhanced when Standards Based teaching and assessment practices - such as Looping - are interwoven into the daily instructional process.

This concept of Looping was juxtaposed with the common practice of allowing students to ask for Redos and Retakes. While Redos and Retakes were not directly discouraged, educators were encouraged to focus first on building reassessment into the very fabric of the learning process instead of waiting to reassess after students decide they don't like their grades.

The post and the concept of Looping generated quite a bit of feedback via social media.  A common response went something like this:

I really like the idea of Looping.  Could you share practical examples of what this might look like in a classroom?  

If you haven't read the original post yet, I would suggest doing so before moving on.  Once - or if - you have, then read below for very practical and applicable examples of Looping shared in her own words by Robin Tamagni, an Earth Science teacher at Salem High School in Salem, VA.


How do I loop in my class?

The first thing that I do is teach my Earth Science content to the best of my ability.  I try to explain and break down everything and have no assumptions that my students just ‘know what I’m talking about’.  Once I teach something, I make sure the very next day I go back and have my students practice it with one another, especially the vocabulary.  In Earth Science there is an abundance of new vocabulary that students have never heard of, so going back and practicing it every day with their partners is crucial for establishing, maintaining, and growing knowledge throughout the year.  I use partner quizzing of vocabulary words, flash cards, Quia.com and Kubbu for review games, and acronyms to help students remember the words.  This constant review and practice is Looping in its simplest form. 

Once we have taught and practiced the content, I assess my students.  Specifically, I like to use PowerSchool Assessment (formerly Interactive Achievement) so that instead of just finding an overall grade I can receive and give feedback in terms of mastery of specific content standards.  The data from the assessments shows me areas of strength and weakness for each individual student.  This is an example of what that data looks like for a student. 

[Image: sample PowerSchool Assessment standards-mastery report for one student]

Instead of just seeing a grade of 63%, PowerSchool Assessment provides me with more specific and standards-based feedback.  I learn that a student does better with the topic of Igneous Rocks, but struggles with Sedimentary and Metamorphic Rocks.  Therefore, I am able to focus on their problem areas so they can grow rather than waste their time and mine reteaching them everything about rocks.

As important as this standards-based data is for my decision making, it is even more important to get the data in the hands of my students so they can be trained to let it guide their decision making. Training them to understand and interpret data is something I begin doing early in the school year and then am very consistent with all year long.  To help make the students' data meaningful, I give them what I call the "Weak Areas Sheet" (see example below or click link to download a Word file). 

[Image: sample Weak Areas Sheet]

On each student's Weak Areas Sheet I fill in the mastery feedback from PowerSchool Assessment into the blank for each assessed standard.  Now the students know exactly which specific topics they need to work on. 

On my classroom website, our school's other Earth Science Teacher, Wes Lester, and I have compiled a huge list of resources for practicing each specific standard.  (Visit Mrs. Tamagni's Study Center)  These practice activities include Quia "Who Wants to Be a Millionaire" games, YouTube video clips to reteach a topic, Kubbu games to practice sorting vocabulary, Purpose Games to practice labeling features of the earth, practice quizzes, etc.  Each standard has a list of these types of activities that are specifically labeled for easy access. 

[Image: Study Center practice activities organized by standard]

My typical lesson planning involves giving students opportunities each week to go to my website and work on their weakest areas.  Generally this looks like students taking about 15 minutes in class to log in, pick 3 games in each of their weakest areas, and practice.  I ask them to complete the review game and then show me their results when they have done so.  I may use this as a Do Now/Bell Ringer activity or as an Exit Ticket activity.  If I find that I have 10 unexpected extra minutes near the end of class, having my students get out their Weak Areas Sheets and do some Looping is a practical and meaningful way to "fill that time". 

Looping in this manner also works great for students who are accelerated.  First of all, this method of assessment lets me know who has actually mastered the standards rather than just who happens to have a high grade.  It is rare to find a student who has truly mastered ALL taught standards.  However, when I do find someone who has reached this level I can let the student go ahead and start practicing standards that will be taught in the future, or I can give that student an opportunity to serve others by coaching peers who are weak in standards they're strong in. 

As the school year progresses, my looping practices expand somewhat.  By mid-year I have worked hard to create an abundance of practice stations in my room.  Each station correlates to a specific content standard.  By then, my students definitely know where their weak areas are.  I strategically pair students up with one another (one weak, one strong) and have them travel around my classroom to all the different stations, beginning at their weakest standards.  The stronger student is coached on how to act as a peer teacher, taking the opportunity to explain and help their partner through the weaker standards.  While serving as a peer coach does not come naturally to all students, if I focus on developing great relationships with my students they become more willing to work at it as a way to help me.

Opportunities for Growth

A final key component of my looping strategy involves "never letting go of the past".  Each time students take a test in my class they will always have a retest on old standards at the same time.  For example, students will take their first test on our Rocks and Minerals standards in September.  Then in October, they will take a test again on Rocks and Minerals and a separate test on Plate Boundaries.  Then in December students will take another test on Rocks and Minerals and Plate Boundaries, but this time we'll add in Earth’s History. 

This method of looping means that each time a student takes a test they have an opportunity to demonstrate growth - as opposed to just demonstrating how well they have learned (or memorized) the current content.  Let’s say a student scores a 60% on the first Rock and Mineral Test in September.  A 60% does NOT reflect what they will know about Rocks and Minerals by April.  Students are encouraged to continuously get better and grow in each standard instead of just moving on and forgetting about it. 

In October when we test again on Rocks and Minerals along with Plate Boundaries students will have worked on their weaknesses in the category of Rocks and Minerals and will hopefully show some sort of growth within that standard.  If a student has demonstrated growth I will replace the old score with their new score since that new score is now a better reflection of what they actually know.  If a student has scored the same or lower I will add that score to the grade book and use it as communication for where they need additional growth and practice.  
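The replace-if-higher rule above is simple enough to sketch in a few lines of code. This is only an illustration of the logic (the function and the score format are invented here, not anything the actual grade book uses):

```python
def record_retest(scores, new_score):
    """Apply the looping retest rule to one standard's score history.

    If the retest shows growth, the new score replaces the old one,
    since it better reflects what the student now knows.  Otherwise
    both scores stay in the book as a signal of where more practice
    is needed.
    """
    if scores and new_score > scores[-1]:
        scores[-1] = new_score    # growth: replace the old score
    else:
        scores.append(new_score)  # same or lower: record it too

# A student scores 60% on Rocks and Minerals in September...
rocks = [60]
record_retest(rocks, 75)  # October retest shows growth
record_retest(rocks, 70)  # a later dip is kept as feedback
```

After these two retests the history reads [75, 70]: the September 60% is gone because it no longer describes the student, while the dip to 70 stays visible as a flag for more practice.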

The Looping strategies I have described are essential to getting kids to learn and master content.  There is definitely a lot of infrastructure that must be created before it can be done well.  However, the payoff is worth the effort.  

Thoughts or questions?  Feel free to leave comments below.  You can also reach Robin at her profile page on this network or email her at rtamagni@salem.k12.va.us.  Similarly, Scott can be reached at his profile page on this network or reached via email at scotthabeeb@gmail.com.

I just had an opportunity to watch AFL principles being applied in an interesting manner in a teacher’s classroom. The teacher is Lewis Armistead. The class is Advanced Algebra/Trig. This class is dual enrolled with Virginia Western Community College and includes Math students ranging from pretty strong to our strongest. Today is the final day of the 3rd grading period and the final day of the semester here at Salem High School. All teachers in our school are required to verify their grades at the end of each grading period to ensure that the electronic grade book has the correct average. Most do this – as I did when I was in the classroom – by spot checking a few students in each classroom. Mr. Armistead, on the other hand, uses this as an opportunity to create a culture of students tracking their progress. Our school uses Student Planners/Agenda Books from Premier Agendas. In the front of those agendas we have several pages called the Record of Achievement (ROA). (see image below)

When I taught freshmen, our 9th grade teachers required students to use this ROA since keeping up with your grades was a skill that could help lead to academic success. I always figured, though, that requiring higher-level or older students to do this would be a little “Mickey Mouse”. After watching Mr. Armistead today I realized that I was wrong. Even the strongest and oldest students in the school can benefit from a teacher who requires them to use something like an ROA to track their progress. So here’s what Lewis did:
  • He had each student in the class take a moment to calculate his or her grade for the grading period. To do this the students had to look at their grades in their ROA - and of course they had to have been keeping their grades in their ROA.
  • He then had each student come up to him and compare their calculation with his grade book. If there was a discrepancy then they checked to find out why. If the numbers matched – which they appeared to do almost every time – then grades had been verified.
  • Once the grading period grade was verified they then calculated their semester averages and repeated the process.
  • He then went ahead and showed them the grades they would be receiving for the 4th grading period and had them set up their ROAs.
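The calculation each student performs in step one can be sketched as a simple points-based average (a minimal illustration with made-up point values; actual entries and rounding conventions would vary by class):

```python
def points_average(entries):
    """Total earned points over total possible points, as a percent -
    the calculation a student does when totaling the ROA."""
    earned = sum(e for e, p in entries)
    possible = sum(p for e, p in entries)
    return round(100 * earned / possible, 1)

# One student's Record of Achievement for the grading period:
# (earned, possible) for a quiz, a test, a homework, and a test
roa = [(27, 30), (88, 100), (9, 10), (145, 150)]
student_calc = points_average(roa)

# The teacher compares the student's number with the electronic
# grade book; any mismatch is a discrepancy to investigate.
gradebook_avg = 92.8
verified = (student_calc == gradebook_avg)
```

When the two numbers match, the grade is verified; when they don't, teacher and student track down why, which is exactly the ownership the practice is meant to build.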
I share this practice for two main reasons:
  1. I think it was a strong classroom practice that others might want to emulate. In order for this to work the teacher must have very consistent procedures and expectations and the classroom must be well-managed. I encourage everyone to add to their “toolbox” practices that lead to consistency.
  2. It is an example of how the principles of AFL can be incorporated into all aspects of our classroom. Students in Mr. Armistead’s class have been trained to take all graded feedback and calculate the impact that it has on their grade. This is imperative if students are going to take ownership of their progress.
We all wish students would do something like this. Instead of just wishing, Mr. Armistead has chosen to make it happen. I feel it important to continue to remind people that AFL isn’t "some big new thing" one does. AFL is more the reason and the philosophy behind the things that are done. If AFL principles guide us, then the things we already do will evolve and grow to more effectively provide teachers and students with useable feedback. Mr. Armistead’s practice is an example of this. Because of it, students are being trained how to use teacher feedback to guide their progress. I bet something like this could be applied to your classroom.

56 Examples of Formative Assessments

David Wees, the Formative Assessment Specialist for New Visions Public Schools, has created a Google Slides presentation with 56 practical examples of formative assessments to use in the classroom.  For anyone looking for ways to expand their AFL toolbox, this is a no-brainer.

The presentation can be found at this site: https://docs.google.com/presentation/pub?id=1nzhdnyMQmio5lNT75ITB45rHyLISHEEHZlHTWJRqLmQ&start=false&loop=false&delayms=3000

When faced with a new concept it is natural and necessary to attach meaning to that concept. Sometimes when we find an understandable example of that concept we begin to confuse that idea for the concept itself. As Salem High School and the City of Salem Schools strive to master the concepts of Assessment FOR Learning, it is understandable that this will happen to some degree.

For example, earlier in the year we at SHS discussed a strategy of having a final test grade or portions of a final test grade replace the quiz grades that led up to that test. (Read about that here.) This method made the quizzes into practice assignments that prepared the student for the test. I began to receive some feedback from people saying that AFL wouldn't apply to their classes because this strategy for whatever reason did not fit into their classroom or teaching style. While this was a good example of AFL, it was just an example. AFL is bigger than any one practice, which led to this post on that topic.

Similar questions have arisen over time in regard to various other procedures that have been held up as examples of AFL. My post on philosophy v. procedures attempted to deal with the fact that AFL is much bigger than any one procedure.

Recently I have received feedback that shows that the practice of allowing students to retake tests and quizzes is being seen as the crux of AFL. While I have heard from many teachers who have used retakes as a way to allow students to learn from feedback, as was the case with tests replacing quizzes, AFL is bigger than retakes.

To help illustrate this I thought it might be useful to describe how AFL might have impacted my own classroom - if I hadn't left the classroom 6 years ago for the dark side of the force (administration)! :)

In my 9th Grade World History classroom my assessments and my grading were very closely related. While many of my graded assessments were AFL-ish (although I had never heard of AFL back then) I realize that I did not do enough assessing solely for the purpose of learning rather than grading. Here's how I assessed/graded:

1. Almost Daily Homework Assignments - 10 pts/assignment
Each assignment directly prepared students for the quiz the next day.

2. Almost Daily Quizzes - 30 pts/quiz
Often the same quiz was given several days in a row so that students could master the content.

3. Almost Weekly Tests - Range of 100 pts/test to 500 pts/test
Tests would build on themselves. A 100 pt test might cover Topic A. A 200 pt test might cover Topics A and B. A 300 pt test might cover Topics A, B, and C, and so on... By the time we got to the larger tests the students tended to have mastered the content because they had been quizzed and tested on it over and over - not to mention what we had done in class with notes, activities, videos, debates, etc.

So what would I do differently now that I have spent so much time grappling with AFL? Here are the changes:

1. Change in point values:
  • Homework would still be given but would either not count for points or all homework assignments would add up to one homework grade of approximately 30 points. Another idea I have contemplated would be that at the end of the grading period students with all homework completed would get a reward, perhaps a pizza party, while students with missing assignments would spend that time completing their work.
  • Quizzes would still be given almost daily but would now only count 10 or 15 points each. In addition, if a student's test grade was higher than the quizzes that led up to it I would excuse the quiz grades for that student.
  • Tests would count more. In the class I taught the tests were used as the ultimate gauge of mastery learning. The tests would continue to build on themselves but would probably start somewhere around 300 points and build up to around 800 points.
2. Change to How Quizzes are Viewed:
  • To build on the point I made above, the quizzes would be excused if the student's test grade was higher. The quizzes would be considered practice grades. Students would be trained to not fret about quizzes but to instead use them as ways to gauge their learning. I might even borrow Beth Moody's GPS idea occasionally and allow students to retake an occasional quiz; however, this would probably not be the case for most quizzes since whenever possible I would be repeating quizzes anyway.
  • The goal of quizzes would be to practice for the test. In the past I viewed the quizzes more as grades unto themselves. The problem with this, though, was that if I had four 30 pt quizzes before a 100 pt test, then the quizzes added up to more than the test. Adding in the four or five 10 point homework assignments further got in the way. Yes, they were assessments that helped the students learn, but they also had an inappropriate impact on the grade. They could help the student master the content as evidenced by the high test score while simultaneously lowering the student's grade.
3. Students Assessing Their Own Progress:
  • If I was in the classroom today I would add an entire new element of students assessing themselves. I would want students to take control of their own learning and to know what they do and don't know. I would then want them to use that knowledge to guide their own studies.
  • One thing I would do would be to make sure that every day (if possible) the students and I would both receive feedback. As I prepared my lessons I would ask myself the questions posed in this earlier post.
  • When I reviewed with students for tests I would change my method and adopt a strategy similar to this one used by Paola Brinkley and many other teachers in our building. (I would probably find a way to turn it into a game since I love playing games in class.)
  • At the beginning of each unit/topic I would give students a rubric like the one in this post. At some point during most class periods I would have the students use the rubric to assess themselves and see how well they are mastering content. They would then use the rubric as a study guide as described in the post.
  • I would also have students analyze their grades regularly so that they would know how well they needed to do on a test to reach their grade goal. (Implied in this is the fact that I first would have students regularly set goals.) I would use a strategy similar to this one used by Lewis Armistead.
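To see how the revised point values change the math, here is a rough sketch of a unit grade under the new scheme. It assumes a straight points-based average and interprets the excusal rule per quiz (any quiz the test percentage beats is excused); all numbers are hypothetical:

```python
def unit_grade(quizzes, test):
    """Grade one unit with quizzes excused when the test beats them.

    quizzes: list of (earned, possible) for the small 10-15 pt quizzes
    test:    (earned, possible) for the heavily weighted unit test
    """
    t_earned, t_possible = test
    test_pct = t_earned / t_possible
    # A quiz only counts if the student did at least as well on it
    # as on the test; otherwise it was practice and is excused.
    counted = [(e, p) for e, p in quizzes if e / p >= test_pct]
    earned = t_earned + sum(e for e, p in counted)
    possible = t_possible + sum(p for e, p in counted)
    return round(100 * earned / possible, 1)

# Three 10-pt quizzes (60%, 70%, 90%), then a 300-pt test at 90%
grade = unit_grade([(6, 10), (7, 10), (9, 10)], (270, 300))
```

Here the two weak early quizzes are excused and the grade comes out at the 90% the test demonstrated. Under the old scheme, four 30-pt quizzes (120 points) could outweigh a 100-pt test and drag the grade below what the student ultimately mastered.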

Notice that my new plan for my classroom doesn't look incredibly different from my old one. I am assessing daily - which I was already doing - but I have changed my view on grading - it's no longer primary as it once was. Assessing is now different from and more important than grading. I have added more opportunities for students to assess themselves.

Notice that retaking tests was not a part of my AFL plan. Students are already taking multiple tests on the same content. Because those tests build in point value, mastery by the end outweighs performance at the beginning. Students are also being quizzed regularly and regularly assessing themselves. There really isn't a need for retaking the test. (Please realize that this does not mean that retaking tests should be frowned upon. It simply isn't the only way to use AFL.)

So does this mean that the plan I have outlined is how AFL should be done? NO NO NO NO NO! It's how AFL could be done. It is guided by AFL philosophies and ideas, but those same ideas could lead to very different procedures in other classrooms and with other content. AFL is big enough to go beyond certain practices and instead guide all good instructional practices.

Any thoughts?

Sometimes when you're learning a new skill or trying to figure out how to apply a new philosophy, it helps to watch that skill or philosophy being used or implemented in a totally different arena.  Thinking outside the box and adopting new ideas can be difficult when you're extremely familiar with your own domain.  Observing the skill or philosophy at work in someone else's domain is less threatening.  Once you are able to see the benefit of the skill or the power of the philosophy it might be easier to figure out how to include it into your personal realm of familiarity.

I think this might hold true for the application to the classroom of the philosophies of Assessment FOR Learning, Standards Based Grading, and Measuring Student Growth.

Below is a recent Sports Illustrated article about the Oklahoma City Thunder's Kevin Durant.  As I read it I was struck by just how much sense it makes to assess for the purpose of learning (not grading), to grade and assess based on standards, and to intentionally and meaningfully measure growth.  It just makes so much sense when it comes to improving in life, as evidenced by this article about Durant's attempts to improve as a basketball player.  I wonder why it doesn't always make sense in the classroom where we educators are working tirelessly to get students to improve?

Read the article below for yourself, and as you do, pay attention to the intentional steps Kevin Durant has taken to improve his shooting.

  1. He is constantly - daily - assessing himself.
  2. He has broken down shooting into "standards" based on different locations on the floor.
  3. He is using the feedback from the assessments to determine what "standards" he needs to practice and where he needs to grow.
  4. His improvement is constantly being charted so that he and his personal trainer/shot doctor/video analyst/advance scout can keep adjusting the learning plan for maximum growth.

It just makes so much sense for him to do this.  Durant wants to grow, and this is how one intentionally sets out to grow.  

Likewise, it makes sense to me that every teacher would want to:

  1. Constantly - daily - assess students.
  2. Break down learning into standards based on content knowledge and skills.
  3. Use assessment feedback to determine which standards individual students need to focus on in order to grow.
  4. Constantly chart improvement so that learning plans can be adjusted for maximum growth.
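Step 3 - using assessment feedback to decide what to practice next - really just amounts to sorting the "standards" by mastery. A minimal sketch (the categories and percentages below are invented for illustration):

```python
def weakest_standards(mastery, n=2):
    """Return the n standards with the lowest mastery percentage -
    the ones a student (or a shooter) should practice next."""
    return sorted(mastery, key=mastery.get)[:n]

# Durant-style shooting "standards" with made-up percentages
shooting = {
    "catch-and-shoot, right elbow": 46,
    "pull-up three in transition": 33,
    "left corner three": 45,
    "free throws": 90,
}
focus = weakest_standards(shooting)
```

Swap shooting zones for content standards and the same selection drives a student's Weak Areas Sheet: practice goes where the data shows the least mastery, not everywhere at once.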

So read the article below, look for the examples of Assessment FOR Learning, Standards Based Grading, and Measuring Student Growth, and then consider how you could better apply them to your classroom.

HOW 'BOUT THEM APPLES?

Copied from http://www.sportsillustrated.com and written by Lee Jenkins (@SI_LeeJenkins)

On the day after the Heat won their 27th game in a row, Kevin Durant sat in a leather terminal chair next to a practice court and pointed toward the 90-degree angle at the upper-right corner of the key that represents the elbow. "See that spot," Durant said. "I used to shoot 38, 39 percent from there off the catch coming around pin-down screens." He paused for emphasis. "I'm up to 45, 46 percent now." Durant wore the satisfied expression of an MIT undergrad solving a partial differential equation. You could find dozens of basic or advanced statistics that attest to Durant's brilliance this season - starting with the obvious, that he became only the seventh player ever to exceed 50% shooting from the field, 40% from three-point range and 90% from the free throw line - but his preferred metric is far simpler. He wants what Miami has, and he's going to seize it one meticulously selected elbow jumper at a time.

The NBA's analytical revolution has been confined mainly to front offices. Numbers are dispensed to coaches, but rarely do they trickle down to players. Not many are interested, and of those who are, few can apply what they've learned mid-possession. Even the most stat-conscious general manager wouldn't want a point guard elevating for an open jumper on the left wing and thinking, Oh no, I only shoot 38% here. But Durant has hired his own analytics expert. He tailors workouts to remedy numerical imbalances. He harps on efficiency more than a Prius dealer. To Durant, basketball is an orchard, and every shot an apple. "Let's say you've got 40 apples on your tree," Durant explains. "I could eat about 30 of them, but I've begun limiting myself to 15 or 16. Let's take the wide-open three and the post-up at the nail. Those are good apples. Let's throw out the pull-up three in transition and the step-back fadeaway. Those are rotten apples. The three at the top of the circle - that's an in-between apple. We only want the very best on the tree."

The Thunder did not win 27 straight games. They did not compile the best record. Durant will not capture the MVP award. All he and his teammates did was amass a season that defies comparison as well as arithmetic. They scored more points per game than last season even though they traded James Harden, who finished the season fifth in the NBA in scoring, five days before the opener. They led the league in free throws even though Harden gets to the line more than anybody. They posted the top point differential since the 2007-08 Celtics, improving in virtually every relevant category, including winning percentage. Their uptick makes no sense unless Durant was afforded more shots in Harden's absence, but the opposite occurred. He attempted the fewest field goals per 36 minutes of his career. He didn't even take the most shots on his team, trailing point guard Russell Westbrook, and he seemed almost proud that his 28.1 points per game weren't enough to earn the scoring title for the fourth consecutive year. "He knows he can score," says Thunder coach Scott Brooks. "He's trying to score smarter."

Durant is lifting Oklahoma City as never before, with pocket passes instead of pull-ups, crossovers instead of fadeaways. He remains the most prolific marksman alive, unfurling his impossibly long arms to heights no perimeter defender can reach, but he has become more than a gunner. He set career marks in efficiency rating, assists and every newfangled form of shooting percentage. "Now he's helping the whole team," says 76ers point guard Royal Ivey, who spent the past two seasons with the Thunder. "Now he's a complete player." The Thunder are better because Durant is better. Of course, the Heat will be favored to repeat as champions, and deservedly so. But Oklahoma City has been undercutting conventional wisdom for six months.

NBA history is littered with stars who languish in another's shadow, notably Karl Malone, Charles Barkley, Patrick Ewing and Reggie Miller through the Michael Jordan reign. Oklahoma City lost to Miami in the Finals last June, and Durant will surely be runner-up to LeBron James in the MVP balloting again. Durant is only 24 and is as respectful of James as a rival can be, but he's nobody's bridesmaid. "I've been second my whole life," Durant says. "I was the second-best player in high school. I was the second pick in the draft. I've been second in the MVP voting three times. I came in second in the Finals. I'm tired of being second. I'm not going to settle for that. I'm done with it."

Justin Zormelo doesn't have a formal title. He is part personal trainer and part shot doctor, part video analyst and part advance scout. "He's a stat geek," Durant says, expanding the job description. Zormelo sits in section 104 of Oklahoma City's Chesapeake Energy Arena, with an iPad that tells him in real time what percentage Durant is shooting from the left corner and how many points per possession he is generating on post-ups. After games, he takes the iPad to Durant's house or hotel room and they watch clips of every play. Zormelo loads the footage onto Durant's computer in case he wants to see it again. "If I miss a lot of corner threes, that's what I work on the next morning before practice," Durant says. "If I'm not effective from the elbow in the post, I work on that." Zormelo keeps a journal of their sessions and has already filled two notebooks this season. Last year Zormelo noticed that Durant was more accurate from the left side of the court than the right, and they addressed the inconsistency. "Now he's actually weaker on the left," Zormelo says, "but we'll get that straightened out by the playoffs."

Zormelo, 29, was a student manager at Georgetown when Durant was a freshman at Texas, and they met during a predraft workout at Maryland that included Hoyas star Brandon Bowman. Durant embarked on his pro career and so did Zormelo, landing an internship with the Heat and a film-room job with the Bulls before launching a company called Best Ball Analytics in 2010 that has counted nearly 30 NBA players as clients. Zormelo kept in touch with Durant, occasionally e-mailing him cutups of shots. They bonded because Zormelo idolizes Larry Bird and Durant does, too.

Durant left a potential championship on the table in 2011, when Oklahoma City fell to Dallas in the Western Conference finals. About two weeks after the series, Durant scheduled his first workout with Zormelo in Washington, D.C. "I didn't sleep the night before," Zormelo remembers. "I was up until 4 a.m. asking myself, What am I going to tell the best scorer in the league that he doesn't already know?" They met at Yates Field House, where Georgetown practices, and Zormelo told Durant, "You're really good. But I think you can be the best player ever." Durant looked up. "Not the best scorer," Zormelo clarified. "The best player." It was a crucial distinction, considering Durant had just led the league in scoring for the second year in a row yet posted his lowest shooting percentage, three-point percentage and assist average since he was a rookie. He was only 22, so there was no public rebuke, but he could not stand to give away another title.

"He was getting double- and triple-teamed, and in order to win a championship, he needed to make better decisions with the ball," says former Thunder point guard Kevin Ollie, now the head coach at Connecticut. "He needed to find other things he could do besides force up shots. That was the incentive to change his pattern." Over several weeks Zormelo and Durant formulated a written plan focusing on ballhandling, passing and shot selection. They were transforming a sniper into a playmaker. Growing up, Durant dribbled down the street outside his grandmother's house in Capitol Heights, Md. He played point guard as a freshman at National Christian Academy in Fort Washington. He watched And1 DVDs to study the art of the crossover. "Where I'm from, you got to have the ball," Durant says. "That's how we do it. We streetball." But he sprouted five inches as a sophomore, from 6'3" to 6'8", and suddenly he was a forward. Though his stroke didn't suffer, his handle did. "I still had the moves," Durant insists, "but I dribbled way too high."

He could compensate in high school, and even during his one season at Texas, but the NBA was changing to a league where the transcendent are freed from traditional positions and boundaries. When Portland was deciding between Durant and Ohio State center Greg Oden before the 2007 draft, Texas coach Rick Barnes copped a line that Bobby Knight used when the Blazers were debating between Jordan and center Sam Bowie in 1984. "He can be the best guard or he can be the best center," Barnes told G.M.'s. "It doesn't matter. Whatever you need, he'll do." The Trail Blazers selected Oden and Durant was taken second by Seattle, where coach P.J. Carlesimo started him at shooting guard. "Kevin could be all things," Carlesimo says, but back then he was too gangly to hold his spot or protect his dribble. Brooks replaced Carlesimo shortly after the franchise relocated to Oklahoma City the following season and wisely returned him to forward.

In the summer of 2011, as the NBA and its union were trying to negotiate a new collective bargaining agreement, Durant created an endless loop of YouTube videos with his preposterous scoring binges at East Coast pickup games. What the cameras didn't show were the drills he did during daily 6 a.m. workouts at Bryant Alternative High School in Alexandria, Va., with Zormelo pushing down on his shoulders to lower his dribble. Durant even tried to rebuild his crossover, but when the ball kicked off his high tops, he hurled it away in frustration. "I'm never really going to use this!" he hollered.
But at all those pickup games, he asked to play point guard, and in downtime he watched tapes of oversized creators like Bird and Magic Johnson. "Opponents are going to do anything to get the ball out of your hands," Zormelo told him. "They're going to make you drive and pass." Durant could typically beat double teams simply by raising his arms. Even though he is listed at 6'9", he is more like 6'11", with a 7'5" wingspan and a release point over his head. The only defenders long enough to challenge his jumper aren't normally allowed outside the paint. "Most guys can't shoot over the contested hand," says Brooks. "Not only can Kevin shoot over it, he uses it as a target. If anything, it lines him up." Durant didn't distinguish between good and bad shots, because through his eyes there was no such thing as a bad one. Every look was clean. "I had to tell him, 'If you have a good shot and I have a good shot, I want you to take it,'" Brooks says. "'But if you have a good shot and I have a great shot, you have to give it to me.'"
Ballhandling drills begat passing drills. Durant saw what the Thunder could accomplish if he took two hard dribbles and found an abandoned man in the corner. With Zormelo's research as a guide, Durant identified his sweetest spots at both elbows, both corners and the top of the key. From those happy places, he is doing the Thunder a disservice if he doesn't let fly, but outside of them he prefers to probe. He moves a half step slower so he can better see the floor.

This season Durant is averaging two fewer field goals and nearly two more assists than he did in 2011, and he has practically discarded two-point shots outside 17 feet. Brooks tells him on a near nightly basis, "KD, it's time. I need you to shoot now." Says Brooks, "To extend the apple metaphor, I'm now able to put him all over and get fruit." He isolates Durant at the three-point line, posts him up and uses him as the trigger man in the pick-and-roll. When defenders creep too close, Durant freezes them with a crossover at his ankles or deploys a rip move that former Thunder forward Desmond Mason taught him four years ago to pick up fouls.

"Remember when tall guys would come into the league and people would say, 'They handle like a guard!' but they never actually did handle like a guard?" says Thunder forward Nick Collison. "Kevin really does handle like a guard." Durant has become both facilitator and finisher, shuttling between the perimeter and the paint, stretching the limits of what we believe a human being with his build can do. If his progression reminds you of someone else's, well, that's probably not an accident.

Durant was 17 when LeBron James invited him into the Cavaliers' locker room at Washington's Verizon Center after a playoff game against the Wizards. "That's my guy," Durant says. "I looked up to him, and now I battle him." In a sense, the 2011 lockout was a boon for the NBA because it allowed the premier performers to explore new boundaries. James fortified his dribble, and so did Durant. James developed his post skills, and so did Durant. James studied his shot charts, vowing to eliminate inefficiencies, and so did Durant. James already passed like Magic, but Durant started to pass like Bird. They hopped on parallel evolutionary tracks, advancing in the same manner at the same time. When a quote from James is relayed - "He's my inspiration. We're driving one another" - Durant nods in approval. It's as if the finest poets in the world are also each other's muses.
"I don't watch a lot of other basketball away from the gym," Durant says. "But I do look at LeBron's box score. I want to see how many points, rebounds and assists he had, and how he shot from the field. If he had 30 points, nine rebounds and eight assists, I can tell you exactly how he did it, what type of shots he made and who he passed to." Durant and James take flak for their friendship, but it is based on a mutual appreciation of the craft. They aren't hanging out at the club. They are feverishly one-upping each other from afar. "People see two young black basketball players at the top of their game and think we should clash," Durant says. "They want the conflict. They want the hate. They forget Bird cried for Magic. A friend was getting on me about this recently, and I said, 'Calm down. I'm not taking it easy on him. Don't you know I'm trying to destroy the guy every time I go on the court?'"
Oklahoma City beat Miami in Game 1 of last year's Finals and trailed by only two points with 10 seconds left in Game 2. Durant spun to the baseline and James appeared to hook his right arm, but no foul was called and Durant's shot bounced out. The Thunder did not win again, but Durant stood arm-in-arm with Westbrook and Harden at the end of the series, a tableau of defeat but also of a boundless future. Not one was over 23. Durant and Westbrook had already signed long-term contract extensions, and Harden was still a year from restricted free agency. But on Oct. 27, unable to agree on an extension with Harden, Oklahoma City sent him to Houston in a trade that threatened the very culture Durant had built. For a player who attended four high schools, spent one year at Texas and one in Seattle, the Thunder signified the stability he lacked. "People tell you it's a business, but it's a brotherhood here," Durant says. "We draft guys and we grow together. We build a bond. When James left, we had to turn the family switch off."
In the first meeting after the deal, Brooks told his players, "We're not taking a step back." But everywhere else they heard otherwise. "My cousin texted me, 'I'm a Heat fan now, but I still hope you make it to the Finals,'" Durant recalls. "That's my family! That's my cousin!" He shakes his head at a small but lingering act of betrayal. "A lot of friends from home were talking about other teams, and I thought they were on our side. I don't want to be angry or bitter, but it started to build up, and I took it out on my teammates." Previously, if power forward Serge Ibaka blew a box-out, Durant would tell him, "It's O.K. You're going to get it next time." But the stakes had risen. "You want to get to the Finals again, and you think everything should be perfect, and it's not," Durant says. "So I'd scream at him and pump my fist."
Durant has picked up 12 technical fouls this season, more than twice as many as his previous career high, and he was ejected for the first time, in January, after arguing with referee Danny Crawford. "I'm rubbing off on him," says Thunder center Kendrick Perkins, who keeps a standing 2 a.m. phone call with Durant every night to discuss the state of the team. "He's getting a little edge on." The techs dovetailed neatly with Nike's "KD is Not Nice" marketing campaign, but they still don't fit the recipient. Even after the ejection, Durant stopped to high-five kids sitting over the tunnel. "People get it confused and think you have to be a jerk to win," he says. "But we all feed off positive energy. I'm a nice guy. I enjoy making people happy and brightening their day. If someone asks me for an autograph on the street, I don't want to wave him off and tell him, 'Hell, no.' That's not me. The last few months I've calmed down and had more fun. We can still get on each other, but there's another way."

Without Harden, Oklahoma City needed a new playmaker, and Durant had spent more than a year preparing for the role. He just didn't realize it at the time. "They were looking for somebody else to move the defense and handle the ball in pick-and-roll," says a scout. "It turned out to be him." When Durant was 20, the Thunder asked him to act 25, and now that he is nearly 25, the plan for his prime has come to fruition. He is the NBA's best and perhaps only answer for James. "I've given up trying to figure out how to stop him," says Celtics coach Doc Rivers. "And I'm not kidding."

On Nov. 24, four weeks after Harden left, the Thunder were a respectable but unremarkable 9-4 and nursing a five-point lead with one minute left in overtime at Philadelphia. Durant posted up on the right wing, bent at the waist, a step inside the perimeter. Dorell Wright, the unfortunate 76er assigned to him, planted one hand on Durant's rib cage and another on his back. "What do I tell a guy in that position?" asks an NBA assistant coach. "I shake his hand and say, 'Good luck.'"

Durant faced up against Wright, tucked the ball by his left hip and swung his right foot behind the arc, toe-tapping the floor like a sprinter searching for the starting block. Durant had scored 35 points, but on the previous possession he fed Westbrook for a three, and on the possession before that he set up a three by Kevin Martin, who had arrived from Houston in the Harden trade. It was time for the Durant dagger, but before he shimmied his shoulders and unfurled his arms he spotted guard Thabo Sefolosha, ignored in the left corner. Sefolosha was 1 for 6, and in the previous timeout Durant had told him, "You're going to make the next shot." Durant could have easily fired over Wright and finished the Sixers, but he let his mind wander to the ultimate destination, seven months away. I'm going to need all these guys to get to the Finals, he thought.

Durant took one dribble to his left, and center Lavoy Allen rushed up to double him at the free throw line. He dribbled twice more, to the left edge of the key, and two other Sixers slid over. Surrounded by four defenders, Durant finally shoveled to Sefolosha, so open that he feared he might hesitate. He didn't. Durant jabbed him in the chest as the ball slipped through the net.

How about them apples?

6 Key AFL Ideas

The 2008-2009 school year was my school and school system's first year exploring Assessment FOR Learning/Formative Assessment. It was definitely a learning year for all of us.

Over the summer of 2009 I spent some time thinking back on what I had learned about AFL during the year. I thought about conversations that had occurred on our school's AFL Committee. I thought about time spent with individual teachers as we worked together to implement AFL practices into their classroom. I thought about articles and books I had read, videos I had watched, and many other AFL-related staff development opportunities in which I had participated.

The result was that I entered the 2009-2010 school year with a much greater appreciation for AFL. I had come to see how all-encompassing it really was - how it could truly impact our entire approach to instruction. I also realized that it was very easy to have misconceptions of exactly what AFL is all about.

All of that led to what I call my 6 Key AFL Ideas. When one understands and can apply these 6 ideas, AFL will have a positive impact on instruction and learning. However, when any of these ideas are missing or not understood, it seems to me that AFL loses its effectiveness or perhaps isn't even present.

6 Key AFL Ideas
1. Assessment and grading are not the same thing.
2. There aren’t AFL assignments and non-AFL assignments.
3. AFL provides a framework or reason for why we do what we do.
4. Assessment for LEARNING as opposed to Assessment for GRADING.
5. We learn from our mistakes.
6. Students need to know what they need to know so they can know if they know it.

Now let me explain in a little more detail what each of these ideas means:

1. Assessment and grading are not the same thing.
Don't get it into your head that AFL means changing or altering the way you grade. AFL means assessing to help students learn. This can be done without grading. However, if you don't grade well you can negate your AFL efforts. In other words, if you use all sorts of assessments to provide feedback to students and as a result your students learn, but then you grade in a way that causes their grades not to reflect their learning, your grading practice has negated the AFL. While assessment and grading are not the same thing, you must be willing to grow as needed in your grading practices as you grow in your assessment practices. But remember - when one speaks of assessing students it doesn't have to mean grading students.

2. There aren’t AFL assignments and non-AFL assignments.
AFL is HOW you USE assignments, not what assignments you use. Something has an AFL purpose if you and/or the students use the feedback to further learning. All assessments can be used for an AFL purpose. AFL doesn't mean you will have to completely change the types of assessments you use. What it means is that you will be very cognizant of how frequently you assess so that you can provide very regular feedback to students.

3. AFL provides a framework or reason for why we do what we do.
AFL is a philosophy. When we attach a name or meaning to what we do, we are more likely to do it. A lot of people hear about an AFL strategy and say, "I already do that." But here's the thing - why do you do that? Education is not an exact science. Many of us stumble on certain activities or procedures that work. But do we understand why they work? If we have a governing philosophy for WHY we do things, then we are more likely to keep doing them - and even to do them more often. Instead of doing something because we've always done it, we do it because it fits our governing philosophy. This will most likely lead to that practice being enhanced and more practices like it being added to our toolbox.

4. Assessment for LEARNING as opposed to Assessment for GRADING.
Don’t be afraid to assess and not grade. Think of other ways to give feedback besides a traditional grade. Don’t get locked into the idea that you must average all feedback in order to determine a grade. Just because you give some sort of feedback doesn't mean the "grade" has to count into the whole. That is a box that educators find themselves in too often. It results in us grading student practice too much. The student ends up learning because of our teaching, but then receives a grade that understates that learning because of our grading. Assess for the purpose of learning.

5. We learn from our mistakes.
When a student makes a mistake in your classroom (does poorly on an assignment), can that mistake be used for instruction and learning? Or does it always inherently lead to a lower grade and, therefore, discourage learning? We all know that in life we usually learn the most from our mistakes. Too often in education students don't have the chance to demonstrate that or to erase their mistakes. If students realize that they can learn from mistakes and then fix them, they will be more likely to take chances.

6. Students need to know what they need to know so they can know if they know it.
I have come to view this idea as perhaps AFL in its most potent form. If we use AFL properly we can empower students to take control of their learning. If students are regularly - preferably daily - given assessment feedback and then taught how to use it, they are more likely to grow into the types of learners we want them to be. They will gain skills that will carry them beyond us and into future learning experiences. Consider using rubrics on a regular basis. Let your students know your thoughts on AFL. Explicitly describe why you are assessing and doing what you do. Encourage/teach/require them to assess themselves.


I hope those 6 ideas make sense and that they help you out as you try to apply AFL to your classroom. Let me know if you have any questions or thoughts.

How do you really know if you taught "it"?

Note to teachers from Salem High School: This is a post about teaching, teachers, and students in general as opposed to a post about specific situations at Salem High School.

So after all the lesson plans have been created, all the class time has been spent, and all the papers have been graded, how do you really know if you've taught your content well?
I might get under some people's skin with this post, but I want to get us to really think about our profession and WHY we teach.
So what's the answer to the question of how we know if we have taught our content well? If we're really going to live up to our calling, we must answer it this way: We know we have taught our content well if all our students have learned it and their grades reflect this.
Let's clear up one misconception before it has a chance to grow - our job is not to make sure all students get good grades. Our job is, however, to make sure that all students learn our content. That's the whole point of being a teacher - to get students to learn. It's also our responsibility to grade in a way that reflects the amount of learning. So while good grades are not our focus, learning is. And when learning occurs, if we grade properly, good grades will follow.
Ok, let's clear up another misconception before we proceed - saying that it's our job to make sure that all our students have learned does not absolve students of their role in the learning process. Obviously poor decisions by our students will end up impacting the amount of learning that occurs. However, we cannot control their decision making. We can, though, control how we teach and how we grade. Therefore, if what we are doing is not leading to the mastery of content, and if our grades are not accurately reflecting the level of mastery reached by our students, then it is incumbent upon us to do something about it. There is no room in education for complacency. Our attitude must be that IF THEY HAVEN'T LEARNED IT, THEN WE HAVEN'T TAUGHT IT.
I remember taking Macroeconomics in college. Without going into too much detail, suffice it to say that while the professor may have "known his stuff", he was an absolutely lousy teacher. There must have been about 400 students in the class. I was only taking the class Pass/Fail. I really felt bad for my classmates as I looked at the posted grades after each test we took. I remember earning a 60 on the mid-term and having it curved to a B+. I really didn't care since it was Pass/Fail, but I remember thinking what a joke it was to say that this person was teaching. Obviously many of the students - myself included - were not putting the amount of effort into the class that we should have, but how could that professor be satisfied with himself knowing that almost none of his 400 students were mastering the content in his course?
I envisioned this professor sitting with his colleagues in the departmental office complaining about "college students these days". While I wasn't around in his day, I really doubt there ever was a time when college students enjoyed boring lectures, no descriptive feedback, and undecipherable tests. We didn't learn it, and he didn't teach it.
So what is an appropriate level of failure for your students? Should you be satisfied if 70% master your content? 80%? 90%? While it's important to keep a certain level of reality mixed in with your idealism so that you don't go crazy, WE MUST HAVE THE ATTITUDE THAT WE ARE GOING TO STRIVE FOR 100% MASTERY. Notice I said strive. This means we will not be complacent. We will continue to tweak, change, try, experiment, etc. to always try and bring more students to mastery level.
This is where Assessment FOR Learning has its greatest power. To some degree, it saddens me when teachers have difficulty incorporating - or worse, don't try to incorporate - an AFL philosophy into their teaching. The reason is that an AFL philosophy will lead to greater content mastery. To not incorporate AFL strategies into one's teaching is to be satisfied with not doing the best you can to teach your students. Let me give an example of what I mean:
If you "teach" content and then give a summative assessment (a traditional test, for example) without lots of assessment along the way, you know what will happen. The students who are very dedicated workers and/or the students who can sit in class and "get it" will do very well on the test. The students who do little to no work outside of class or who can't just sit in class and "get it" will do very poorly. Another group of students will score somewhere in between. For years, teachers have satisfied themselves with this outcome by "blaming students". In other words, because some students almost always do well, the teacher convinces himself or herself that all students could have done well if they had either worked harder, paid more attention, or were more academically gifted. The teacher "knows" he or she taught the content because SOME students have mastered the content. This is a convenient defense strategy for teachers as it absolves teachers of the responsibility of making sure that students learn. YES, students have a role in it (as stated earlier), but we can't control all of their decisions. We CAN control how we teach, though.

The scenario in the above paragraph is very common in schools. Essentially, it is being satisfied with the bell curve of life. As educators, we have the privilege of smashing the bell curve. We have the opportunity to be the "difference-maker" in a kid's life. Too often this opportunity is squandered as we sell short our ability to alter the outcome of a student's learning. AFL - formative assessment - can be a powerful tool in our attempt to maximize that opportunity. And it really doesn't require much additional work on our part.
Take the example from 2 paragraphs above. If instead of "teaching" and then giving a summative assessment, the teacher would instead assess EVERY DAY, then an incredible difference could be made in the typical bell curve outcome. For example:
  • If every day the students left class knowing what they know and being aware of what they have not yet mastered - this happens because of specific classroom assessment activities led by the teacher - then students will perform better on the summative assessment. Have you ever experienced a situation as a student where you thought you knew what was going on until you took the test? You studied, and you thought you understood the content. Then you took the test and realized you didn't know it at all. This is all too common - but it shouldn't be. If this is happening to students in your class then you need to apply more AFL strategies. This is a clear sign that you need to provide activities that require your students to assess themselves throughout the learning process so that they are acutely aware of how well they're doing and what they need to do to prepare for the summative test.
  • If students were quizzed/tested/assessed repeatedly leading up to the summative assessment, then the summative assessment would not catch them by surprise. Do you ever hear your students complain that they understood the content but were surprised by the types of questions on the summative test? Unfortunately, this is a common occurrence as well. It's a clear sign that a teacher has not employed an AFL philosophy. AFL is about using assessment FOR learning. Teaching and then giving a summative assessment only is AFG - Assessment FOR Grading. It's using assessment to find out how much people know. While this has to happen eventually - there is nothing wrong with a summative assessment - it does little to help the learning process. If students are assessed regularly - DAILY - then the feedback from the assessments will actually help them learn - thus the name, Assessment FOR Learning.
Let's clear up 2 more misconceptions:
  1. But what about students who still refuse to work? Couldn't they still come into class completely unprepared and fail the assessment? Of course. But they are also the outliers. Let's focus on the majority of students - the ones who do what we ask. Let's not lose a good strategy just because a few students continue to make bad decisions. HOWEVER, I would contend that those poor decision-making students would learn more if they were assessed daily and provided with opportunities to assess themselves - even if they didn't work hard outside of class.
  2. But what about rigor? Shouldn't a rigorous class by its very nature lead to a bell curve of sorts? The rigor in a class should not be demonstrated by the student grades that result. The rigor of the class is inherent in the difficulty of the content. However, assuming that the students who are in the class have been properly prepared and have academic strengths on par for the class, then there is no reason that students shouldn't enjoy great success in a rigorous course. Our job as teachers is to get students to learn. That is no less true in a rigorous class than it is in a "general level" course. Unfortunately, it is common for teachers in rigorous classes to feel that the rigor of the course justifies the lack of success of some students. Again - grades aren't the goal. Learning is. But if AFL strategies can lead to students in rigorous courses getting higher grades that are reflective of increased learning, then how could we not employ those strategies?
So, how do you know if you've taught your content? You know it if your students have learned it. And AFL strategies will help increase that learning - which is, after all, WHY we teach.

Ok - which would motivate you more... A chance to win a date with Angelina Jolie or a chance to win a date with Brad Pitt?

Weird question, right?  I was watching a TV discussion about Hollywood's "most beautiful couple", and for some strange reason, I saw an educational parallel buried beneath it.  

Here's the point: If you would be motivated by a chance to win a date with Angelina Jolie, then a chance to win a date with Brad Pitt probably wouldn't do much for you.  And if you'd do anything for a date with Brad Pitt, you probably don't care too much about a chance to go out with Angelina Jolie.  This got me thinking about external motivators and how they're often used - or misused - by educators.  

External motivators don't cause people to be motivated if they don't already care about the motivator itself.  External motivators don't create new motivation - they just reinforce motivation that's already in place.

The purpose of this post is not to encourage or discourage the use of external motivation.  The purpose is to challenge educators to look at such motivators with a dose of reality - not all motivators will work for all students and not all are appropriate to use in all situations.

As I have worked with teachers around the country on the topic of assessment and grading, it is rather easy to help people reach a level of cognitive agreement with the concept of making sure that a grade assigned to a student represents mastery.  But many teachers struggle with the fear that if they assign grades based on mastery they will lose the "carrot and stick" of rewarding with points or assigning low grades and zeroes.  I don't pretend to have the answer to every hypothetical or potential grading and assessment situation - and I definitely don't believe there is a one size fits all solution that works in every class with every student.  But I do know the following to be true:

  1. Students who routinely do not turn in work or make up missed assignments tend to not be motivated by the fear of the zero or the low grade - or they would have done the work in the first place.  I'm not suggesting that a zero or low grade couldn't be appropriate in certain situations, but we just shouldn't fool ourselves into thinking that this external motivator was ever working with these students to begin with.  To tell a routine "zero-getter" that he'll receive a zero if he doesn't turn in his work would be like telling me I'll lose out on a date with Brad Pitt if I don't do my work.  BTW - I didn't mention this earlier but I would be much more motivated by a chance to win a date with Angelina Jolie!
  2. Students who already care about their grades are the ones motivated by grades and earning points because they already care about those external motivators.  Over the years teachers have used grades as carrots and sticks with these students to encourage compliance.  However, these are the students who drive us crazy when they seem to care only about earning points rather than learning content.  So using grades as the primary motivator to get these students to do work is a problem for another reason - it reinforces the idea these students already have that points are more important than learning.
  3. When grades are used as inappropriate carrots and sticks - as opposed to appropriate ones - grades become falsified.  Rather than communicate mastery, they begin to represent how hard or how much a student worked instead of what the student learned.  This is unacceptable - unless your goal is for the grade to represent effort more than or as much as mastery.

It's really hard to blame teachers for using grades as carrots and sticks.  After all, it's been done this way forever.  We are all products of an educational system that operates as though everyone is motivated by the same external factors and that trains students to only work for external motivators.  Our teachers did this - our universities taught us to teach this way - our school divisions' grading systems are usually set up this way - it's just the way it's always been.

But that doesn't mean it has to stay this way.

While it's fine to come up with carrots and sticks that work in your classroom with individual students it's not fine to:

  1. Have a "bag of motivational tricks" so limited that we end up trying to use tricks we know won't work with certain students instead of searching for other ways to motivate, inspire, encourage, and successfully demand that students work.
  2. Foster the misguided idea that collecting points is more important than learning.
  3. Assign final grades that we know do not reflect content knowledge and skills gained as a result of our excellent teaching.

I really don't have specific answers to share - just some things to think about.  

If you think there might be a better way, then you can't keep doing what you've always done.  If you wanna change, then you gotta change.  Don't expect to keep everything the same except for your allocation of points and then see a revolution in your classroom.  If you're looking for a place to begin, try exploring the concepts of Standards Based Learning.  I'd suggest taking a look at some of the resources on http://rickwormeli.net and following the Twitter Chat #SBLchat - Wednesdays at 9:00 pm EST.

I apologize if I've muddied the waters more than I've made them clearer, but sometimes answers aren't simple.  Asking questions, though, is essential.  Try asking yourself these:

  1. Do I try to use grades and/or points to motivate?
  2. Does it work the way I want it to?
  3. Does it lead to falsified grades (grades that don't represent mastery)?
  4. Would there be other external motivators I could use with my students besides grades and points?

 

If they don't want to hang out with Brad, see if they'd rather hang out with Angelina...


AFL, Art Class, and Failure Management

Sometimes you pick up little nuggets of wisdom when you least expect it...

 

I'm sitting in a hotel room in Williamsburg, VA.  Tomorrow is the start of the annual VASSP conference.  I ate dinner at Sal's Ristorante (lasagna - not bad, but not great) and decided to read a little before going to bed.  I picked up one of the books that I've been reading lately, John Ortberg's If You Want to Walk On Water, You've Got to Get Out of the Boat - long title, but excellent book.

 

While Ortberg's book is not specifically about education or the classroom, it deals a lot with fear and failure - 2 topics that do play a major role in education.  On page 148, Ortberg writes the following:

 

...another important part of failure management - taking the time and having the courage to learn from failure.

 

A book called Art and Fear shows how indispensably failure is tied to learning.  A ceramics teacher divided his class into 2 groups.  One group would be graded solely on quantity of work - fifty pounds of pottery would be an "A", forty would be a "B", and so on.  The other group would be graded on quality.  Students in that group had to produce only one pot - but it had better be good.

 

Amazingly, all the highest quality pots were turned out by the quantity group.  It seems that while the quantity group kept churning out pots, they were continually learning from their disasters and growing as artists.  The quality group sat around theorizing about perfection and worrying about it - but they never actually got any better.  Apparently - at least when it comes to pottery - trying and failing, learning from failure, and trying again works a lot better than waiting for perfection.  No pot, no matter how misshapen, is really a failure.  Each is just another step on the road to an "A".  It is a road littered with imperfect pots.  But there is no other road.

 

The AFL principles just jumped off the page at me.  This story obviously applied to an art class - or any other class in which something is produced - but I really think it applies to every single classroom in our schools.  Failure is a tool for success.

 

This story brought the following questions to mind:

  1. Do you give your students enough practice?
  2. Do you give your students enough opportunities to fail?
  3. How could failure (from trying) help your students?
  4. Do you ever try to prevent your students from experiencing failure (from trying)?
  5. How could you better explain to your students the importance of failure (from trying)?
  6. How could you better explain to your students' parents the importance of failure (from trying)?
  7. Does your grading system allow for students to learn from failure?
  8. Does your grading system penalize students for failure?
  9. How could you help your students learn from their failures?
  10. Along with opportunities to practice, do you also provide appropriate feedback so students know if they are failing? 
  11. What could you do to create a culture of failure (risk-taking and trying) in your classroom? 

 

I want to encourage you to consider how, in the spirit of AFL, you can embrace appropriate failure in your classroom.

 

Any thoughts? 


The Power of Asking "Can You"

My daughter, Kelsey, is an eighth grader at Andrew Lewis Middle School where she, as her sister before her, is blessed to have Beth Swain as her Geometry teacher.  

 

Geometry is proving to be a challenging class for Kelsey.  She is very intelligent and a hard-worker, and while Math is and always has been her favorite subject, she's starting off slower than normal in Geometry.  Thankfully, Mrs. Swain uses the kind of AFL strategies that help young people master content.  

 

So far, Kelsey's Geometry class has had 3 large tests.  Kelsey scored a D when she took the first test.  In many classrooms a large test like this would be used as a summative assessment; however, Mrs. Swain uses tests in a formative/AFL manner.  This means that the D was not the end of the story.  The grade could still improve since the purpose of the assessment was to promote learning as opposed to the purpose being to provide a grade.  Mrs. Swain chooses to use even large chapter tests formatively - like check-ups - rather than summatively - like autopsies.  After taking the first test, Kelsey's class was allowed to perform a "test analysis" that led to her mastering the content and earning a 95 (an A) on the test.

 

Then came the second test.  Again, the content was not easy for her, but she worked hard.  Kelsey scored a C on that test.  Again, Mrs. Swain used the test in an AFL manner, and Kelsey again was able to perform a test analysis which resulted in her understanding the content better and earning a B+.

 

So this brings us to the third test and the power of asking "Can You?"   On Monday, October 31, Beth Swain communicated the following message to parents via email:

 

Good afternoon!  The chapter 3 test will be this Friday with the vocab test being on Thursday.  To help students prepare for the test, they were given a "Can You?" sheet today.  If they can answer yes to all the "can you..." questions on the sheet by Thursday night then they should be prepared for the test.  If they can't answer yes then they need to practice those concepts so that they fully understand them.  Please make sure your child is making use of this sheet as they prepare for the test. 
As always, I am available in the mornings to help them if they need me.

As a parent, I was so encouraged to receive this email.  I don't know if your kids are like mine, but there seem to be a few standard answers to the questions my wife and I ask.  Those answers seem to be "Nothing" and "I Don't Know."  It's always nice to hear from a teacher information that allows me to ask more effective questions.  In this case, I was able to ask Kelsey, "How are you doing on your 'Can You' sheet?"  All week I was able to encourage Kelsey to make sure she was using the "Can You" sheet as it was intended.

 

More importantly, though, was the fact that this "Can You" sheet and the way Mrs. Swain used it enabled Kelsey to take better control of her own learning and studying.  She was given a tool that assisted her in assessing herself on a daily basis and then making decisions based on the feedback she received.  

 

So on the first test Kelsey scored a D the first go around.  On the second test, Kelsey scored a C the first go around.  On the third test - the one with the "Can You" sheet - Kelsey scored a B+ the first go around.  She told me that she felt much better heading into that test than she had on the previous two.

 

AFL strategies are rarely "revolutionary".  Rather, they are often as simple as asking students "Can You".  It's very encouraging to see teachers using strategies like this that empower parents to assist their children and that train students to assess themselves and to take ownership of their own progress.   

 

(For some other similar examples check out Using a Review Sheet in an AFL Manner and A Self-Assessment Rubric for Math.)


Student Self-Assessment

If you've read much on this Assessment FOR Learning site you're aware of the 4 components of The Heart of AFL.  One of those key components is that students will use feedback to guide their own learning on both a short- and long-term basis.  

This concept often causes educators to roll their eyes as they think to themselves, "No student of mine ever asked for feedback to guide his learning!"  It often seems like students either don't care about their learning or only care about it to the extent that they collect enough points to receive a high grade.  

If we're not satisfied with this - if we want students to take ownership of their learning instead of being disengaged or only caring about point accumulation - then we need to provide them the tools they need to reach a higher level.  One of the reasons this entire site exists is to provide teachers with the assessment tools they AND their students need to learn - and learning is what WE care about much more than points and grades.

Recently, David Wallace, an art teacher at Salem High School, shared with me this tool he has created to help his students take ownership of their learning.  He calls this specific tool a Project Report.  (A copy of the Project Report can be found by clicking on the words "Project Report" or by scrolling to the bottom of this post.)  He has slightly different yet similar tools for different purposes, but the goal is always the same.  Students in his class are trained to assess how much they know before doing something and then trained to compare that to how much they know after completion of the project.  Furthermore, they are trained to assess the results of their work.  With a tool like this, students can better determine what they need to do in order to improve.

Notice how I keep using the word "train"?  This is exactly what a great teacher does.  Students rarely enter the room with all the tools they need for success.  It is our job as educators to train them.  Training must be specific and include how to use the tools they need.  Simply telling students they ought to keep up with their progress is not enough.  We must give them the tools to do so, train them to use the tools, and then require that they do so.

Mr. Wallace's Project Report was obviously designed for an Art classroom, but I bet you can figure out how to apply a tool like this to whatever content area or grade level you teach.  Got any ideas?



Getting and Giving Student Feedback

Check out this article by Heather Rader on getting and giving student feedback, a concept that is central to AFL!  Hmmmm... the hyperlink feature does not seem to be working!  Here is the full text:

Getting and Giving Student Feedback

Heather Rader

I saved this quote from an email with the title "Why We Love Children":

A little girl had just finished her first week of school. "I'm just wasting my time," she said to her mother. "I can't read, I can't write and they won't let me talk!"

I'll be the first to admit I enjoy the sound of my own voice. I love to tell stories. I love it when people laugh at just the right part or when I scan the room and I have all eyes on me waiting for the next line. But I also enjoy a good Malbec wine, and I know too much of that isn't good for me either.

I learned to pipe down in my personal life about eight years ago when my middle daughter, Maya, began to stutter. When she was unable to get through a short sentence without bursting into tears, we visited a speech specialist. My homework assignment was to record our dinnertime conversation. If a normal conversation has a typical number of verbal demands, in our family it was four times the expected amount. My husband and I talk a lot and Maya's older brother was a motor mouth. While her vocabulary development was three years ahead of her chronological age, she still had the brain of a three-year-old that was unable to keep up with the verbal demands. As I took this in, I paraphrased the speech specialist: "Basically the issue isn't Maya's brain or speech - it's us that need some shut-up therapy." The specialist was sweet; she just smiled and said nothing.

Recent research finds that feedback is most effective when teachers understand how students are making sense of their learning experiences. John Hattie in his book Visible Learning states, "The mistake I was making was seeing feedback as something teachers provided to students... It was only when I discovered that feedback was most powerful when it is from the student to the teacher that I started to understand it better. When teachers seek, or at least are open to, feedback from students as to what students know, what they understand, where they make errors, when they have misconceptions, when they are not engaged -- then teaching and learning can be synchronized and powerful. Feedback to teachers helps make learning visible."

When I consider who is the best educated and the most experienced thinker in the classroom, the answer is almost always the teacher. If I am understanding how the students are making meaning, I can adapt the questions, lessons and interventions. The only way for me to have access to that information is to get it in the form of kid talk - lots of it and in writing too. Schema, 10:2 Theory and Exit Slips are ways to constantly seek feedback on students' understanding.

Schema

A friend of mine, Nari, is a student support manager and was working with the kindergartners on the playground.

"Please don't run on the cement," she said.

"Okay!" said a five-year-old as she was running off.

"Please don't run on the cement," she said again.

"Okay!" said another kindergartner. "Wait - what is cement?"

We laughed because those sweet kids were more than willing not to run on cement; they just didn't know what it was. Because we aren't five or seven or even fifteen anymore, we can't know what's in kids' heads or how they are comprehending the information they are taking in.

Two ways to quickly assess schema are the quick-sketch and quick-write methods. Because I'm not 10 in the year 2010, I know I have different schema for the word clustering that I'm going to teach as a prewriting technique to fourth graders. When I think of clustering, clusters of grapes come to mind, but I ask students to draw a quick sketch on a piece of paper for 30 seconds of what comes up when I say cluster. They think of chocolate peanut clusters, video game clusters, bomb clusters and more. Some have no associations at all. When I take a moment to connect grapes to peanut clusters to video game and bomb clusters and point out that all of those examples have similar elements bunched together and that's what we are going to do in writing, I'm connecting to their experience and supporting meaning making.

10:2 Theory

Ten and two (10:2) theory is based on the idea that students make sense of new information by periodically integrating it with existing information. As learners, we naturally take mental breaks to absorb information even as more information is presented. Mary Budd Rowe (Journal of Teacher Education, 1986) explains how teachers can provide regular pauses to accommodate this need. She recommends we pause for two minutes about every ten minutes (thus the 10:2 theory).

Understanding this idea in theory and actually putting it into practice are two different things. Talking faster to cram more into the ten-minute window or simply directing "now turn and talk to integrate what you've learned into your existing thinking" are not highly effective. I plan my lessons thinking about the rhythm of teaching and learning - like breathing - with this theory in mind. Exhaling is the short minilesson on vivid verbs, and inhaling is when the kids turn to a partner to paraphrase. Exhaling is modeling how to develop a personal list of vivid verbs to use in writing, and inhaling is having the students start their own lists.

Each time I inhale, I'm providing students with the opportunity for talk, writing and feedback. I use a timer to raise my awareness of the pacing and try to keep the new information input under ten minutes before shhhh... letting the kids make meaning.

Exit Slips

These are also known as "did they get it?" receipts and I use them often. My favorite question to ask is, "What was the most important thing you learned in ________ (subject) today?" Here was a response that I received after a revision lesson: "I learned that revision is checking your spelling." Another good one: "I learned that elaboration is writing many words in a sentence." Even better: "I learned that a summary is copying down what was already written."

Yes, my response is to clap my hand on my forehead and moan, but when I'm done doing that, I'm thankful for the informal assessment of student understanding. The clearer I am about students' thinking and misconceptions, the less likely I am to fall under the illusion that everyone is getting it. I use exit slips as five-minute quick-writes that can be preceded by talk to help students reflect on their learning and critical thinking. Most often I use them at the end of the lesson, but they can also be used as we transition during the lesson.

Here are a few other questions/prompts I've used:

• I understand… but I do not understand…
• One question I have is…
• Three words/phrases I heard a lot during this lesson were…
• I know ________ is true because…
• I smiled/frowned today when…

For students who are not writing words or sentences yet I've used:

• Draw a picture of yourself learning today.
• Draw a picture of what your face looked like when you learned _____.
• I could/could not (circle) tell a friend about what I learned.
• The important thing about prewriting is ______.

While it may sound like a Geico commercial, five minutes spent on feedback before, during and at the end of the lesson can save... a lot. After a lesson that doesn't quite work, I always ask myself:

• How did I connect to the students' schema?
• Did I give them multiple opportunities to talk, write and think?
• What did they take away from the learning experience?
• How do I know?

Heather Rader is a writer and teacher who has landed her dream job as an instructional specialist for North Thurston Public Schools (Washington). She's taught all grades K-6 and now enjoys teaching adults and collaborating as an instructional coach. Her motto is "stay curious" for all that life has to offer.

A Letter to the Editor from Rick Wormeli

Recently, several letter writers to the Forest City Summit, an Iowa newspaper, have disparaged standards-based grading.  Specifically, they disparaged Rick Wormeli's work in that field.  As a result, Mr. Wormeli wrote a response to those letter-writers, and the newspaper agreed to run it.

While I am personally unfamiliar with the events in Forest City Schools, IA that led to these letters being written, public arguments like this over grading issues always cause me to wonder if the school division employed too much of a top-down method of improving assessment strategies.  

At its heart, standards based learning really shouldn't be controversial.  Learning should be measured against standards and communicated in terms of standards so that grades actually represent learning and, more importantly, so teachers and students know where to focus their instructional and learning efforts.

When individual teachers implement solid and well-communicated SBL strategies, students tend to appreciate the descriptive and helpful nature of the feedback.  Students tend to appreciate knowing where their strengths and weaknesses are so that they can then focus on improving where necessary.  And typically, when students appreciate what is going on in class and feel like it helps them learn, parents are supportive.

However, when policies are implemented at the division-level and then required or mandated it is not uncommon to create controversy where none need exist.  I would encourage schools and divisions to focus on a meaningful professional development journey - to take the long view approach - instead of looking to change practices by changing policy.

Again, I do not know what exactly went on in this Iowa school district, but I do know that educators exploring the merits of standards based learning would benefit from reading Mr. Wormeli's letter.  

Here's a link to the letter in its original form on the Forest City Summit's website: 

http://globegazette.com/forestcitysummit/opinion/letter-to-the-editor/article_937be5bc-b62a-5874-aec1-d4053dfff9f3.html

Below is the same letter copied and pasted into this blog:  

To the editor:

In recent letters to the editor in the Summit, my work was mentioned as one catalyst for the shift in grading practices in Forest City Schools from traditional to standards-based grading. Many of the claims made by the authors misrepresent me and these practices, however, and I’d like to set the record straight.

Most of us think the purpose of grading is to report what students are learning, as well as how students are progressing in their disciplines.  It is important for grades to be accurate, we say, otherwise we can’t use grades to make instructional decisions, provide accurate feedback, or document student progress.

These are wise assertions for grading. Nowhere in these descriptions, however, is grading’s purpose stated as teaching students to meet deadlines, persevere in the midst of adversity, work collaboratively with others, care for those less fortunate than ourselves, or to maintain organized notebooks. While these are important character attributes, we realize that none of the books or research reflecting modern teaching/parenting mentions grading as the way in which we instill these important values in our children.  

We actually know how to cultivate those values in others, but it isn’t through punitive measures and antiquated notions of grading. Author of Grading Smarter, Not Harder (2014), Myron Dueck, writes,

“Unfortunately, many educators have fallen into the trap of believing that punitive grading should be the chief consequence for poor decisions and negative behaviors. These teachers continue to argue that grading as punishment works, despite over 100 years of overwhelming research that suggests it does not (Guskey, 2011; Reeves, 2010).”

In 2012, researcher John Hattie published Visible Learning for Teachers: Maximizing Impact on Learning, with research based on more than 900 meta-analyses, representing over 50,000 research articles, 150,000 effect sizes, and 240 million students.  He writes,

“There are certainly many things that inspired teachers do not do; they do not use grading as punishment; they do not conflate behavioral and academic performance; they do not elevate quiet compliance over academic work; they do not excessively use worksheets; they do not have low expectations and keep defending low quality learning as ‘doing your best’; they do not evaluate their impact by compliance, covering the curriculum, or conceiving explanations as to why they have little or no impact on their students; and they do not prefer perfection in homework over risk-taking that involves mistakes.” 

Those interested in research on standards-based grading and its elements are invited to read books written by Robert Marzano, Tom Guskey, Carol Dweck, Doug Reeves, John Hattie, Susan Brookhart, Grant Wiggins, Tom Schimmer, and Ken O’Connor. Matt Townsley, Director of Instruction in Solon Community School District in Iowa has an excellent resource collection at https://sites.google.com/a/solon.k12.ia.us/standards-based-grading/sbg-literature.

A caution about worshiping at the research altar, however: ‘Not all that is effective in raising our children has a research base. A constant chorus of, “Show me the research,” adds distraction that keeps us from looking seriously and honestly at our practices.  When we get our son up on his bicycle the first time, and he wobbles for a stretch of sidewalk then crashes abruptly into the rhododendrons, we give him feedback on how to steer his bicycle, then ask him to try again. Where’s the vetted research for doing that? It’s not there, and we don’t stop good parenting because we don’t have journaled research. 

Trying something, getting feedback on it, then trying it again, is one of the most effective ways to become competent at anything. How does an accountant learn to balance the books? Not by doing it once in a trumped-up scenario in a classroom. Can a pilot re-do his landings? ‘Hundreds of times in simulators and planes before he actually pilots a commercial airliner with real passengers.  How do we learn to farm? By watching the modeling of elders and doing its varied tasks over and over ourselves. How do we learn to teach? By teaching a lot, not by doing it once or twice, then assuming we know all there is. I want a doctor who has successfully completed dozens of surgeries like the one she’s about to do on me, not one who made one attempt during training.  

This is how all of us become competent. Some individuals push back against re-doing assignments and tests, however, because there’s a limited research base for it, or so they claim (there’s actually a lot of research on the power of reiteration in learning). My response to the push back is: When did incompetence become acceptable? How did we all learn our professions? Does demanding adult-level, post-certification performance in the first attempt at something during the young, pre-certification learning experience help students mature?

Parents should be deeply concerned when teachers abdicate their adult roles and let students’ immaturity dictate their learning. A child makes a first attempt to write a sentence but doesn’t do it well, and the teacher records an F for, “Sentence Construction,” in the gradebook with no follow-up instruction and direction to try it again? ‘Really? We can’t afford uninformed, ineffective teaching like this. To deny re-learning and assessment for the major standards we teach is educational malpractice. Parents should thank their lucky stars for teachers who live up to the promise to teach our children, whatever it takes. 

We can’t be paralyzed by the notion, put forth by Dr. Laura Friesenborg in her Nov. 25 letter, that juried research journals are the only source of credibility. Dr. Friesenborg says that there has been, “…no robust statistical analysis of students’ national standardized test scores, pre- and post-implementation” of the practices for which I advocate. This is disingenuous because it’s physically and statistically impossible to conduct such a study, as there are so many confounding variables as to make the “Limitations of the Study” portion of the report the length of a Tom Clancy novel. We do not have the wherewithal to isolate students’ specific outcomes as a direct function of teachers’ varied and complex implementations of so many associated elements as we find in SBG practices, including the effects of varied home lives and prior knowledge. If she’s so proof-driven, where is her counterproof that traditional grading practices have a robust statistical analysis of pre- and post-implementation? It doesn’t exist.

She dismisses my work and that of the large majority of assessment and grading experts as anecdotal and a fad education program, declaring that I somehow think students will magically become intrinsically motivated. This is the comment of someone who hasn't done her due diligence on the topic, dismissing something she hasn't yet explored deeply. Be clear: there's no magic here - it's hard work, much harder than the simplistic notion that letter grades motivate children.

Friesenborg diminishes the outstanding work of Daniel Pink, whose book Drive is widely accepted as well researched by those in leadership and education, and she does not mention the work of Vygotsky, Dweck, Bandura, Lavoie, Jensen, Marzano, Hattie, Reeves, Deci, Ripley, de Charms, Stipek and Seal, Southwick and Charney, or Lawson and Guare, whose collective works speak compellingly to the motivational, resilience-building elements found in standards-based grading. Is it because she is unaware of them, or because their studies would run counter to her claims? Here she is distorting the truth, not helping the community.

We DO have research on re-learning/assessing (see the names mentioned above), but it’s very difficult to account for all the variables in the messy enterprise of learning and claim a clear causation. Some strategies work well because there’s support at home, access to technology in the home, or a close relationship with an adult mentor, and some don’t work because the child has none of those things. Sometimes we can infer a correlation in education research, but most of the time, good education research gives us helpful, new questions to ask, not absolute declarations of truth. When research does provide clear direction, we are careful still to vet implications thoughtfully, not dismiss what is inconvenient or doesn’t fit our preconceived or politically motivated notions.

When we are anxious about our community's future, we want clear data points and solid facts, but teaching and learning are imperfect, messy systems, and we're still evolving our knowledge base. Many practices have stood the test of time, of course, but only a minority of them have a strong research base. We can't cripple modern efforts by waiting for one decisive research report to say, "Yay or Nay." At some point, we use the anecdotal evidence of the moment, asking teachers to be careful, reflective practitioners, and to welcome continued critique of practices in light of new perspectives or evidence as they become available. If we're setting policy, we dive deeply into what is available in current thinking and research nationwide so our local decisions are informed.

In her letter, Friesenborg describes standards-based grading as "radical." Please know that it is quite pervasive: thousands of schools across the country are actively investigating how to implement it or have already done so. Most states, in fact, are calling for competency-based learning and reporting to be implemented. Friesenborg states that the Iowa State Board of Education makes standards-based learning a legislative Advocacy Priority. This is a positive thing, and SBG practices promote exactly this. We want accurate reporting. That means we separate non-curriculum reports from curriculum reports. It helps all of us do our jobs, and it provides more accurate tools for students to self-monitor how they are doing relative to academic goals.

Such grading practices are not even close to radical. Read the observations of schooling in Greece, Rome, Egypt, Babylonia, and on through the Renaissance, the 1700s, the 1800s, and the 1900s: grades reporting what students had learned in their subjects were the predominant practice. There were separate reports of children's civility and work habits. That's what we're doing here with SBG, nothing else. It's dramatically more helpful than a grade that indicates a mishmash of "Knowledge of Ecosystems, plus all the days he brought his supplies in a timely manner, used a quiet, indoor voice, had his parents sign his reading log for the week, and brought in canned food for the canned food drive." In no state in our country does the math curriculum say, "Has a nice neat notebook." That's because it's not a math principle, and it has no business obscuring the truth of our child's math proficiency.

We have plenty of research, not to mention anecdotal evidence, that reporting work habits in separate columns on the report card actually raises the importance of those habits in students' minds, helping them mature more quickly in each area. The more curriculum we aggregate into one symbol, however, the less accurate and useful that symbol is as a report for any one of the aggregated elements or as a tool of student maturation. SBG takes us closer to the fundamental elements of good teaching and learning.

Rick Wormeli

Read more…

Does AFL lead to grade inflation?

A criticism of Assessment FOR Learning is that along with it comes pressure to make sure that students' grades increase. In other words, some have been concerned that AFL might lead to grade inflation. I would hope that no school would ever encourage grade inflation while it encourages its teachers to try AFL techniques. I think the concern over grade inflation is first and foremost a misunderstanding about the purpose of AFL.

The primary goal of AFL is not grade inflation. I don't know that it would ever be appropriate for educators to do things solely for the purpose of raising grades. In fact, if grade inflation were the goal, then a focus on AFL wouldn't be necessary. Many teachers already do an excellent job of inflating grades through more traditional measures such as extra credit, dropping the lowest grade, or curving scores. These are practices teachers have used for many years, and they all have the same outcome: inflating grades and making them less representative of actual learning. AFL isn't necessary for inflating grades.

The primary goal of AFL is instead LEARNING INFLATION. The entire purpose of AFL is to increase learning. When teachers assess students in an ongoing manner, use that data to guide their instructional practices, and teach students how to use their own assessment data to chart their progress and guide their studies, then it is only natural that learning will increase. Now let's be honest: when learning increases, grades tend to increase as well. That is, grades will increase if we are grading accurately while learning increases. This is why it is impossible to discuss assessing with AFL techniques without also discussing grading practices. While assessment is not the same as grading, and while not all assessments need to be graded, if teachers aren't careful with their grading practices they can negate their assessment efforts.
For example, if a teacher's assessment practices help a student increase learning to a B level but the grading practices cause the student to earn a D, then the incentive for learning will decrease. Once the conversation moves to grading practices, it is very easy for that subject to dominate the discussion, but don't be fooled - grading is secondary to learning. As you make plans to use AFL in your classroom, focus on this simple mantra: AFL is about how you use assessments to increase learning. Whatever types of assessments you use, use them in an ongoing manner, use the data you receive to guide your instruction, and train/require your students to use their own data to guide their studies. You'll be using AFL, and learning will inflate.
Read more…

AFL Flashcard Review

It's pretty common for a teacher to finish a lesson and still have a few minutes left until the class period ends.  Here is an extremely easy and practical way to turn those remaining minutes into a meaningful AFL opportunity.

Instead of allowing students to sit and talk quietly until the bell rings, these few minutes can be used as a chance for the teacher to assess his or her students so that the teacher and the students know how well content was mastered that day - and so that they can identify areas that need improvement.  The use of AFL flashcards is a simple way to do this.

You will need to create a set of flashcards for each desk in your room.  There will be 2 cards per desk.  Card 1 will have an A on the front and a B on the back.  Card 2 will have a C on the front and a D on the back.  You might want to make a pouch out of paper and tape it to the edge of the desk.  The 2 flashcards can go in this pouch so that the students always have them handy.

Have you ever finished a lesson by asking questions about the lesson only to have very limited response from students?  Perhaps a small handful of students are answering your questions or even asking additional questions, but many in the room have mentally "checked out" and are just waiting for the bell to ring.  It seems as though the following question, "Do you have any questions about what we learned?" in student-language means "Go ahead and pack up and start forgetting everything we did".  Your new flashcards should change this situation.  

Ask all students to pull out their flashcards.  Begin asking the entire class questions about the day's content.  You could even ask about content learned on previous days.  Ask easy questions, hard questions, simple questions, and complex questions.  Ask the type of questions you expect them to know for a test.  They will answer by holding up the appropriate flashcard.  You will be able to see how the class as a whole is doing and also how each individual student is doing.  The students will gain a more useful review than they would have from the normal question/answer period at the end of class, and, therefore, will be better able to assess their own level of understanding.

You could use the cards to represent various types of answers.  For example:

  • A,B,C,D could be multiple choice answers.  
  • A could equal true, and B could equal false.  
  • A could equal "I can answer that", and B could equal "I am unable to answer that".  
  • A could mean "I completely understand that topic". B could mean "I sort of understand but am not ready to take a test on it", and C could mean "I do not understand the topic".    

Read more…

An Assessment Becomes a Learning Tool

Today I had the privilege of observing a Salem High School Algebra 1 Part 1 class being taught by Jennifer Shannon. I watched a teacher very intentionally make sure that her classroom assessment - in this case the test she had given the previous day - was used as a LEARNING TOOL instead of simply a GRADING TOOL.

Essentially Mrs. Shannon allowed her students to make corrections to their tests and earn partial credit as a result. This practice is fairly common. However, Mrs. Shannon added a few wrinkles to this common practice to ensure that students were doing more than just going through the motions of making corrections and instead were actually learning content not previously mastered.

To give some context, this was a 9th grade Algebra 1 Part 1 class taught in a traditional 50 minute class period. Algebra 1 Part 1 students at SHS - there are only about 30 - are the students who struggle the most with learning math concepts.

At the start of class Mrs. Shannon divided her 14 students into groups. She explained to them that the groups were based on the types of errors made on the test given the previous day. So, for example, one of the groups consisted of students who had had difficulty on the section of the test that dealt with properties. Another group consisted of students who had had trouble simplifying radicals. Another group of students all had what Mrs. Shannon termed general problems. A fourth group had done quite well on the test.

Mrs. Shannon had one of the groups move to a separate area of the room and work with her student teacher. Another group moved to another area of the room and worked with the special education professional that cooperatively teaches with her. Mrs. Shannon worked with the remaining two groups, which included the one that had done well on the test.

In their groups, students were to rework the problems they had missed and ask for help as it was needed. They were allowed to use their notes to help them.

The students who only had a few corrections to make were given laptops. When they finished their corrections, they were to log in to Quia and begin working on an enrichment activity that would prepare them for the next unit of study.

So what was especially AFL-ish about this that made it stand out? Good question. Here are some answers:
  1. Typically teachers will tell students that they can take their test home and do corrections on their own. Some will. Some won't. Some will do it just to get back points but won't actually learn the content better. Some might even cheat to get the right answers. Mrs. Shannon made sure that this assessment was a learning tool by having the corrections be a classroom activity guided by teachers.
  2. Mrs. Shannon clearly used assessment-elicited evidence to design her lesson. It was from the test results the day before that she was able to group her students so that they would receive the help and instruction that they need in order to learn.
  3. The entire activity occurred because Mrs. Shannon realized from the test that the students as a whole had not mastered the content. This test gave her the feedback she needed to know that if her goal was to increase learning she was going to need to find a way to reteach some of the material. The beauty of this activity was that it then allowed her to reteach to each student only what he or she needed.
  4. The idea of earning back points was not the major focus of this activity. The major focus was learning the material. In fact, because Mrs. Shannon made this a class activity I would bet that the outcome would have been almost identical if students hadn't been able to get points. In other words, this was about learning. The test the day before was used by Mrs. Shannon NOT as a way to determine the students' grades but rather as a way to determine their learning so that she could adjust her instruction with the ultimate goal of having her students learn.
In the hands of a skilled practitioner, even a traditional and routine activity like making test corrections can become a powerful AFL learning moment.
Read more…

Would this work? (A question for Math teachers)

First, a disclaimer: I am not and never have been a Math teacher.  After teaching Modern World History for 7 years, I went to the "Dark Side" and became an administrator, so I probably don't know anything about teaching Math.

However, I do know a few things about teaching in general.  Furthermore, I have a pretty good grasp of the philosophy of AFL and how applying it in the classroom can increase student learning.  So I'm going to give this a shot.

I've noticed that one of the problems that some Math students have is that they don't practice at home.  I will in no way advocate stopping the assignment of practice to be done at home.  Practice at home is a worthy topic unto itself, but please do not read into what I am about to write that I am recommending not having students practice at home.  In fact, I would recommend assigning practice to be done at home every single night.  But if:

  1. Many of our students don't practice at home, and
  2. We realize that we cannot control what one does at home, and
  3. We believe practice is required to learn the content, and
  4. We care MOST about whether or not students learn as opposed to whether or not they make responsible decisions outside of class, then
  5. It makes sense to provide as much practice time as possible during class since that is the only time in a student's day we can control.

Of course this idea of giving time to practice in class fits very nicely with the philosophy of AFL.  The AFL teacher would give practice opportunities in class that provide useful feedback for the teacher and the student.  Frankly, I've never known a Math teacher who doesn't give students chances to practice in class.  Usually this comes in the form of a practice problem or getting started on the night's homework.  Typically the teacher will move around the room to see how students are doing and to answer questions the students might have.

There is absolutely nothing wrong with this sort of practice activity, but like everything, it does have limitations.  For example, if a student chooses to just "go through the motions" of doing the practice, then very little feedback will be received.  Also, the student who, for whatever reason, doesn't ask questions will quite possibly not learn as well as the student who does ask questions.  Finally, the teacher is only one person and almost always outnumbered greatly by students.  It can be difficult to give each student the specific feedback they need during such an activity.

One more background observation before I share my idea.  I have noticed that students often take test-like or quiz-like situations more seriously than they do other activities.  In other words, kids who will goof around and disrupt classroom practice tend - in a well-led classroom - to sit quietly and do as they're told during a test or quiz situation.

That's a lot of build up and background to an idea that's not all that earth-shattering.  In fact, I'm sure the Math teachers out there will respond by saying, "Been  there, done that!"  But I still figured I'd share a potential practical application of the philosophy of AFL to the Math classroom.

In a nutshell, the idea is to break up the Math process into steps and then give students a daily quiz on each step as they learn the process.  It would look something like this:

  • The Math process being taught is broken down into steps.  For this discussion let's assume we're learning Math Process P which is divided into 3 steps.
  • The teacher teaches Step 1 and then gives students a quiz on Step 1.  The quiz will ONLY be on Step 1 and it will be worth X points.
  • The teacher teaches Step 2 and then gives the students a quiz on Steps 2 AND 1.  This quiz will be worth 2X points.  The student or the teacher might even  choose to erase the first quiz from the grade book or set it to not factor into the grade.
  • The teacher teaches Step 3 and then gives the students a quiz on Steps 3 AND 2 AND 1.  This quiz will be worth 4X points.  The student or the teacher might even choose to erase the first two quizzes from the grade book or set them to not factor into the grade.  
  • The teacher reviews the quiz on Steps 3, 2, and 1 and then gives a unit test on all aspects of Process P.  This unit test is worth 10X points.  The student or the teacher might even choose to erase the quizzes from the grade book or set them to not factor into the grade.

Here are some more details:

  • A quiz might be given the same day as the respective step was taught.  On the other hand, a step  might take more than one day to teach.  If a step takes a few minutes to teach, then the teacher will quiz on it after giving the students a chance to practice it.  If it takes the entire class period to teach the step, then the quiz will open class the next day.  
  • If any step takes more than one day to teach, then the students will take a quiz on that step on consecutive days.
  • There will be a quiz given every day.

To me this seems like a way to make sure students are practicing in class.  For example, even if the student did no homework, he would still practice Step 1 three times before the unit test, Step 2 two times, and Step 3 one time.  Beyond just practicing the step, the student would be receiving more feedback and more direct feedback than is typically received when the class goes over practice or homework problems.  Finally, the teacher would get valuable feedback as he or she would know how each student - as opposed to just the question askers - was doing on each step.  Plus the teacher would have the specific feedback necessary to tightly focus remediation efforts, determine what might need to be retaught, and create differentiation efforts.  
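The cumulative quiz scheme above can be sketched in a few lines of code. This is only an illustration: the base value X is arbitrary, and the step names and point values are assumptions for the example, not prescriptions.

```python
# Sketch of the cumulative quiz scheme described above.
# X is an arbitrary base point value (an assumption for illustration).
X = 10

# Each quiz covers every step taught so far; the point value doubles
# with each quiz, and the unit test is worth 10X.
quizzes = [
    {"covers": ["Step 1"], "points": X},
    {"covers": ["Step 2", "Step 1"], "points": 2 * X},
    {"covers": ["Step 3", "Step 2", "Step 1"], "points": 4 * X},
]
unit_test = {"covers": ["Step 1", "Step 2", "Step 3"], "points": 10 * X}

# Count how many times each step is practiced before the unit test.
practice_counts = {}
for quiz in quizzes:
    for step in quiz["covers"]:
        practice_counts[step] = practice_counts.get(step, 0) + 1

print(practice_counts)  # {'Step 1': 3, 'Step 2': 2, 'Step 3': 1}
```

The count matches the claim in the paragraph above: even a student who does no homework practices Step 1 three times, Step 2 twice, and Step 3 once before the unit test.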

So, Math Teachers, what do you think?  Could this work?  What have I overlooked?  Would this type of practice - this use of assessment for the purpose of learning - increase the likelihood of students learning?

Read more…

Fantasy Football and the Problem with Averaging

Obsessed with Fantasy Football

I have to confess something: I care way too much about Fantasy Football. Throughout the fall, I’m constantly checking my Yahoo Fantasy app, plotting my next waiver wire strategy, or looking online for updates about player injuries. I am addicted to Fantasy Football.

This year I was the champion of my Fantasy Football league. Actually, that’s an understatement. I smashed the competition!

Players in our league can win in 3 categories:

  1. Regular Season Champ: After 13 weeks, this team has the best win/loss record and qualifies for the playoffs as the top seed.
  2. Playoff Champ: This team makes the playoffs and then wins the 3 week end-of-season tournament.
  3. Total Points Champ: This team scores the most points over the course of the 16 week season.

As this year’s regular season, playoff, and total points champ, my team was the undisputed champion of the league.

My goal isn’t to brag about my prowess at Fantasy Football. (Although I have to admit I enjoy doing so…) But for this post to help educators, I first need you to understand the following: My season was the best season of anyone in my league and would be considered a dream season for anybody who plays Fantasy Football.

Then I got an email from Yahoo Fantasy Sports, our league’s Fantasy Football Platform.

A Surprise for the Champ: My Season Story

I love Yahoo’s mobile app, their player updates, and the outstanding data analysis they provide to help players make decisions. So when I received an email from Yahoo with a link to my “Season Story,” I was excited to read their analysis of my successful year.

It turned out that by “Season Story” Yahoo meant it was sharing with me an overall grade for, or assessment of, my season. Imagine my surprise when I learned Yahoo assigned a B- as the grade for my dream season! How could this be?

Grading: Yahoo-style

Much like what happens in the traditional American classroom, Yahoo had used a formula to determine my final grade. The formula averaged together the following 3 key data points:

  1. Projection and Final Standing: 40% of the Season Grade
    This compares where I ended my season with where I was projected to finish at the beginning of the season. Yahoo graded me at an A level, which makes sense. After all, I was the champion in all three of our league’s categories. Plus, I had been projected to finish 14th out of 16 teams. With this combination of overall achievement and growth, if I wasn’t an A in Projection and Final Standing, who could be?
  2. Weekly Performance: 30% of the Season Grade
    Yahoo averaged together each week’s performance to get this score. Yahoo graded me at an A- level. An A- makes sense. I could even agree with a B+. Some weeks my team was amazing. Other weeks it was good. But it was never bad.
  3. Draft Strategy: 30% of the Season Grade
    Yahoo graded me at an F level. In other words, at the beginning of the season, Yahoo didn’t think I had selected a good team. Was Yahoo correct that I picked the wrong players? That might have been a logical prediction early on. Perhaps I didn’t start the season on a strong note, but Yahoo had already captured that in my Projection and Final Standing category. Yet the evaluation of my season’s start ended up being the reason my grade was a B- at the end of the season.


Honestly, this grading methodology makes no sense. The purpose of the season grade should be to communicate how successful the season was. With that in mind, the only grade that should have mattered was the summative score of A representing my Projection and Final Standing. That score shows that I achieved at the highest possible level and that I grew beyond expectations. Averaging together the other data points only detracted from the accuracy of what Yahoo was trying to communicate.
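The arithmetic behind that B- is easy to reproduce. Here is a minimal sketch, assuming a standard 4.0 letter-grade scale (Yahoo's exact scale is not published, so the point values for A, A-, and F are assumptions):

```python
# Reproducing Yahoo's weighted-average season grade.
# Letter-grade point values assume a standard 4.0 scale (an assumption;
# Yahoo does not publish its internal scale).
components = {
    "projection_and_final_standing": (4.0, 0.40),  # A,  40%
    "weekly_performance":            (3.7, 0.30),  # A-, 30%
    "draft_strategy":                (0.0, 0.30),  # F,  30%
}

season_grade = sum(points * weight for points, weight in components.values())
print(round(season_grade, 2))  # 2.71 -- roughly a B- on a 4.0 scale
```

One F-weighted component drags an A/A- season down to a B-, which is exactly the distortion the post describes.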


Comparing Yahoo and Schools

Similarly, the common and very traditional practice of averaging together different types of student data taken at various points in time throughout a school year detracts from the ability of a student’s final grade to accurately communicate mastery of content.

Let’s compare Yahoo’s grading language to the language we use in schools:

  1. Projection and Final Standing = Summative Assessment and Student Growth
    Where a student ends up when all is said and done is the summative assessment of that student’s level of content mastery, and student growth refers to how much they grow from start to finish.
  2. Weekly Performance = Formative Assessment
    All the things students do along the way - the practice that helps them learn, the homework, the classwork, the quizzes, the activities - these are formative assessments. Formative assessment’s purpose is to serve as practice and to provide feedback that helps students both grow and achieve summative mastery.
  3. Draft Strategy = Pre-Assessment
    Where a student is before the learning occurs is the pre-assessment. Pre-assessment data helps us know what formative assessments will be necessary to help individual students grow and to guide each of them toward summative assessment mastery.

Lessons from Yahoo for Educators

I believe that by studying Yahoo’s methodology educators will notice the weakness inherent in our own widely-accepted traditional grading and assessment practices. Specifically, we can be reminded that:

  • Averaging past digressions with future successes falsifies grades.
    Pre-assessment data, or data that represents where a student was early in the learning process, should never be averaged with summative assessment data. The early data is useful to guide students toward growth and mastery, but it should never be held against a student by being part of a grade calculation. Otherwise, we run the risk of having the Draft Strategy dictate the Season Story despite the more accurate picture painted by the Projection and Final Standing.
  • Formative assessment is useful for increasing learning but less so for determining a grade.
    Knowing my weekly performance enabled me to make decisions to help my team improve, but the fact that my team did not always perform at an A level does not detract from my team mastering its goals and growing appropriately. If, as a result of formative assessment feedback, a student makes learning decisions that bring her closer to summative mastery, why would we then base the score that represents summative mastery on the formative feedback?
  • Formative assessment data loses value once we have summative data.
    Why did Yahoo care about my Draft Strategy and Weekly Performance once it knew my Final Standing? It’s possible that formative assessment data could be used as additional evidence of learning if we are concerned that the summative assessment doesn’t paint a complete picture, but, in general, once mastery is demonstrated, the fact the student wasn’t always at that same level of mastery becomes irrelevant.
  • It’s impossible to create the perfect formula to measure all student learning.
    Yahoo chose to use a 30/30/40 formula. Why? Some schools say homework should count 10%. Why? Some districts say exams must count 25% of a grade. Why? Some teachers make formative assessment count 40%. Why? Some schools average semesters, some average quarters, and some average 6 grading periods. Why? There is an inherent problem with averaging. We make up formulas because they sound nice and add up to 100%, but there is no definitive formula for determining learning or growth. Averaging points in time, chunks of time, or data taken over time will always mask accuracy. Yet educators, like Yahoo, feel the need to find a formula to justify grades.
  • Using formulas to determine grades inherently leads to a focus on earning points instead of on learning content.
    In the case of Yahoo, they didn’t advertise their formula in advance. Now that I know the formula, I still don’t anticipate changing my strategy in the future because, frankly, I don’t care about my season grade. I care about winning. But students and parents are naturally going to care about grades because of the doors that grades on transcripts open or close. As long as there are final grades, there will always be an interest in getting good grades. When grades are the result of a formula, it naturally leads to a quest for numerator points, something that may not be connected to learning. When this is the case, students ask for opportunities to earn points. When grades are a true reflection of content mastery, a focus on learning is more likely to result. In these situations, students ask for opportunities to demonstrate learning.
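To see concretely how averaging over time masks final mastery, consider one invented score sequence (the numbers are made up for this example): a weak pre-assessment, improving formative work, and a strong summative result.

```python
# Invented scores for one student across a unit, in chronological order:
# weak pre-assessment, improving formative work, strong summative (95).
scores = [40, 55, 70, 85, 95]

averaged_grade = sum(scores) / len(scores)  # the traditional approach
final_mastery = scores[-1]                  # the summative result

print(averaged_grade)  # 69.0 -- a D+, despite demonstrated A-level mastery
print(final_mastery)   # 95
```

The averaged grade reports where the student was along the way; only the summative score reports where the student ended up.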

A Call to Action

It’s time for schools to stop being like Yahoo Fantasy Sports when it comes to our assessment and grading practices.

My Season Story grade should be an accurate reflection of where my season ended up. Along the way, I need the descriptive feedback that will enable me to make informed growth-based decisions.

Students need final grades that are accurate reflections of where they end up in the learning process. Along the way, they need appropriate descriptive feedback so they can make informed growth-based decisions, as well.

Traditional grading is rooted in decades of practice, and shifting the course of our institutional inertia to focus more appropriately on learning rather than grading will take effort and time. Schools must choose to embark on Assessment Journeys that lead to accurate feedback and descriptions of learning, mastery of content, and student growth.

Let’s get started today!

Read more…

Grade Like A Torpedo

Today, as I was reading one of my new favorite books, Teach Like A Pirate by Dave Burgess, I came across a metaphor that I'm sure will stick with me.  It's the metaphor of the torpedo.

In his chapter "Ask and Analyze,"  Burgess shares a story he read in Psycho-Cybernetics by Maxwell Maltz.  Maltz says that humans achieve goals similarly to the way a torpedo finds its mark.  "The torpedo accomplishes its goal by going forward, making errors, and continuously correcting them.  By a series of zigzags it literally gropes its way to the goal."

Burgess goes on to add, "The missile is likely to be off target a far greater percentage of the time than it is on target.  Nevertheless, it arrives and hits its target because of the constant adjustments made based on continual analysis of the feedback provided."

Dave Burgess uses this story about the torpedo to suggest that great teaching is the result of constant adjustments based on feedback and results from the classroom.  However, I couldn't help but think about grading when I read this.

Members of the Assessment FOR Learning Network probably see the immediate connection between this analogy and the principles of Assessment FOR Learning.  Just like a torpedo, the student is often "off target" as the learning process unfolds.  However, the teacher and the student keep making corrections based on continuous feedback.  In the end, the target is reached.  AFL teachers understand that feedback is given for the purpose of learning FIRST.  Grading is secondary and should reflect the final outcome - not the journey.

I couldn't help picturing the torpedo in this story hitting a ship captained by an educator who still holds on to the traditional method of grading in which ALL measurements, ALL feedback, and ALL digressions from the correct path are averaged together to come up with a final grade.  In my mind, I see this angry teacher/captain yelling at the submarine something to this effect:

That's not fair!  Your torpedo can't sink my ship!!!  Most of your torpedo's path was off target.  It's unfair to count that as a hit unless your torpedo was on target for the entire path it took!

Of course, the captain is yelling this as his or her ship slowly sinks into the ocean.  The captain doesn't have to like the path the torpedo took.  It really doesn't matter.  In the end, the torpedo found its mark.  The smartest course of action would be to accept reality and abandon ship.  

The same goes for grading.  Who cares if the student hadn't mastered the concept at some random point along the way?  What we really care about is whether or not the student finally gets it.  Everything that happens along the way is feedback for the teacher and the student to use to ensure the ultimate goal is met.

Have you started thinking about next school year yet?  When you do, give some thought to how you might GRADE LIKE A TORPEDO.

Read more…

Blog Topics by Tags

Monthly Archives