Obsessed with Fantasy Football
I have to confess something: I care way too much about Fantasy Football. Throughout the fall, I’m constantly checking my Yahoo Fantasy app, plotting my next waiver wire strategy, or looking online for updates about player injuries. I am addicted to Fantasy Football.
This year I was the champion of my Fantasy Football league. Actually, that’s an understatement. I smashed the competition!
Players in our league can win in 3 categories:
- Regular Season Champ: After 13 weeks, this team has the best win/loss record and qualifies for the playoffs as the top seed.
- Playoff Champ: This team makes the playoffs and then wins the 3 week end-of-season tournament.
- Total Points Champ: This team scores the most points over the course of the 16 week season.
As this year’s regular season, playoff, and total points champ, my team was the undisputed champion of the league.
My goal isn’t to brag about my prowess at Fantasy Football. (Although I have to admit I enjoy doing so…) But for this post to help educators, I first need you to understand the following: My season was the best season of anyone in my league and would be considered a dream season for anybody who plays Fantasy Football.
Then I got an email from Yahoo Fantasy Sports, our league’s Fantasy Football Platform.
A Surprise for the Champ: My Season Story
I love Yahoo’s mobile app, their player updates, and the outstanding data analysis they provide to help players make decisions. So when I received an email from Yahoo with a link to my “Season Story,” I was excited to read their analysis of my successful year.
It turned out that by “Season Story” Yahoo meant it was sharing with me an overall grade for, or assessment of, my season. Imagine my surprise when I learned Yahoo assigned a B- as the grade for my dream season! How could this be?
Much like what happens in the traditional American classroom, Yahoo had used a formula to determine my final grade. The formula averaged together the following 3 key data points:
- Projection and Final Standing: 40% of the Season Grade
This compares where I ended my season with where at the beginning of the season I was projected to finish. Yahoo graded me at an A level, which makes sense. After all, I was the champion in all three of our league’s categories. Plus, I had been projected to finish 14th out of 16 teams. With this combination of overall achievement and growth, if I wasn’t an A in Projection and Final Standing, who could be?
- Weekly Performance: 30% of the Season Grade
Yahoo averaged together each week’s performance to get this score. Yahoo graded me at an A- level. An A- makes sense. I could even agree with a B+. Some weeks my team was amazing. Other weeks it was good. But it was never bad.
- Draft Strategy: 30% of the Season Grade
Yahoo graded me at an F level. In other words, at the beginning of the season, Yahoo didn’t think I had selected a good team. Was Yahoo correct that I picked the wrong players? That might have been a logical prediction early on. Perhaps I didn’t start the season on a strong note, but Yahoo had already accounted for that in the Projection and Final Standing category. Yet this evaluation of my season’s start ended up being the reason my grade was a B- at the end of the season.
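To see the arithmetic at work, here is a minimal sketch (mine, not Yahoo's actual code) of how a 40/30/30 weighted average can drag a championship season down to a B-. The letter-to-number mapping below is an assumption for illustration; Yahoo has not published its scale.

```python
# Hypothetical sketch of a 40/30/30 weighted season grade.
# The letter-to-number scale is an assumption, not Yahoo's published one.
GRADE_POINTS = {"A": 4.0, "A-": 3.7, "B+": 3.3, "B": 3.0,
                "B-": 2.7, "C": 2.0, "D": 1.0, "F": 0.0}

def season_grade(projection, weekly, draft):
    """Average three category grades: 40% projection/final standing,
    30% weekly performance, 30% draft strategy."""
    score = (0.40 * GRADE_POINTS[projection]
             + 0.30 * GRADE_POINTS[weekly]
             + 0.30 * GRADE_POINTS[draft])
    # Map the numeric average back to the closest letter grade.
    return min(GRADE_POINTS, key=lambda g: abs(GRADE_POINTS[g] - score))

# An A in projection/final standing, an A- in weekly play, and one F for the draft:
print(season_grade("A", "A-", "F"))  # prints B-
```

Note how a single F, weighted at 30%, pulls an otherwise A-level record down to a 2.71 average on this scale, which rounds to B- territory.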
Honestly, this grading methodology makes no sense. The purpose of the season grade should be to communicate how successful the season was. With that in mind, the only grade that should have mattered was the summative score of A representing my Projection and Final Standing. That score shows that I achieved at the highest possible level and that I grew beyond expectations. Averaging together the other data points only detracted from the accuracy of what Yahoo was trying to communicate.
Comparing Yahoo and Schools
Similarly, the common and very traditional practice of averaging together different types of student data taken at various points in time throughout a school year detracts from the ability of a student’s final grade to accurately communicate mastery of content.
Let’s compare Yahoo’s grading language to the language we use in schools:
- Projection and Final Standing = Summative Assessment and Student Growth
Where a student ends up when all is said and done is the summative assessment of a student’s level of content mastery, and student growth refers to how much the student grows from start to finish.
- Weekly Performance = Formative Assessment
All the things students do along the way - the practice that helps them learn, the homework, the classwork, the quizzes, the activities - these are formative assessments. Formative assessment’s purpose is to serve as practice and to provide feedback that helps students both grow and achieve summative mastery.
- Draft Strategy = Pre-Assessment
Where a student is before the learning occurs is the pre-assessment. Pre-assessment data helps us know which formative assessments will be necessary to help individual students grow toward summative assessment mastery.
Lessons from Yahoo for Educators
I believe that by studying Yahoo’s methodology educators will notice the weakness inherent in our own widely-accepted traditional grading and assessment practices. Specifically, we can be reminded that:
- Averaging early struggles with later successes falsifies grades.
Pre-assessment data, or data that represents where a student was early in the learning process, should never be averaged with summative assessment data. The early data is useful to guide students toward growth and mastery, but it should never be held against a student by being part of a grade calculation. Otherwise, we run the risk of having the Draft Strategy dictate the Season Story despite the more accurate picture painted by the Projection and Final Standing.
- Formative assessment is useful for increasing learning but less so for determining a grade.
Knowing my weekly performance enabled me to make decisions to help my team improve, but my team not always performing at an A level does not detract from my team mastering its goals and growing appropriately. If, as a result of formative assessment feedback, a student makes learning decisions that bring her closer to summative mastery, why would we then base the score that represents the summative mastery on the formative feedback?
- Formative assessment data loses value once we have summative data.
Why did Yahoo care about my Draft Strategy and Weekly Performance once it knew my Final Standing? It’s possible that formative assessment data could be used as additional evidence of learning if we are concerned that the summative assessment doesn’t paint a complete picture, but, in general, once mastery is demonstrated, the fact the student wasn’t always at that same level of mastery becomes irrelevant.
- It’s impossible to create the perfect formula to measure all student learning.
Yahoo chose to use a 30/30/40 formula. Why? Some schools say homework should count 10%. Why? Some districts say exams must count 25% of a grade. Why? Some teachers make formative assessment count 40%. Why? Some schools average semesters, some average quarters, and some average 6 grading periods. Why? There is an inherent problem with averaging. We make up formulas because they sound nice and add up to 100%, but there is no definitive formula for determining learning or growth. Averaging points in time, chunks of time, or data taken over time will always mask accuracy. Yet educators, like Yahoo, feel the need to find a formula to justify grades.
- Using formulas to determine grades inherently leads to a focus on earning points instead of on learning content.
In the case of Yahoo, they didn’t advertise their formula in advance. Now that I know the formula, I still don’t anticipate changing my strategy in the future because, frankly, I don’t care about my season grade. I care about winning. But students and parents are naturally going to care about grades because of the doors that grades on transcripts open or close. As long as there are final grades there will always be an interest in getting good grades. When grades are the result of a formula, it naturally leads to a quest for numerator points, something that may not be connected to learning. When this is the case, students ask for opportunities to earn points. When grades are a true reflection of content mastery, a focus on learning is more likely to result. In these situations, students ask for opportunities to demonstrate learning.
A Call to Action
It’s time for schools to stop being like Yahoo Fantasy Sports when it comes to our assessment and grading practices.
My Season Story grade should be an accurate reflection of where my season ended up. Along the way, I need the descriptive feedback that will enable me to make informed growth-based decisions.
Students need final grades that are accurate reflections of where they end up in the learning process. Along the way, they need appropriate descriptive feedback so they can make informed growth-based decisions, as well.
Traditional grading is rooted in decades of practice, and shifting the course of our institutional inertia to focus more appropriately on learning rather than grading will take effort and time. Schools must choose to embark on Assessment Journeys that lead to accurate feedback and descriptions of learning, mastery of content, and student growth.
Let’s get started today!
Which do you care about more - Learning or Grading?
Educators always answer that question with Learning. And if you've spent much time on The Assessment Network, you know that our focus is to help educators use assessment FOR the purpose of learning - rather than to help educators figure out new grading systems.
So while our goal is to explore best practices related to assessment so we can increase learning, the reality is that in order to do so we must spend some amount of time examining our grading practices. It's not that grading practices are the focus, but many traditional grading practices have a negative impact on our ability to provide the type of feedback that leads to learning and on our ability to get students to focus on learning - rather than on "earning" a grade.
One traditional grading practice that has such an impact is an overreliance on creating mathematical formulas to determine a student's grade on a particular assignment. Based on our stated priority - Learning - we should instead be developing methods for providing descriptive feedback that helps students learn. Instead, our profession tends to try to develop just the right formula to "calculate a grade," thereby practicing assessment for GRADING rather than assessment for LEARNING.
For example, take a look at the scored rubric below. Pretend this rubric was used in your class. The student had an assignment that covered 4 standards or topics - 1.1, 1.2, 1.3, and 1.4. You've scored the assignment as evidenced by the Xs in the boxes.
Based on this rubric, what letter grade ( A, B, C, D, or F) would you think the student should receive for this assignment?
If you said B, then you answered the same as almost every single educator who has seen this rubric.
When educators are shown this rubric, they tend to think the student should receive a B. After all, in 3 of the 4 standards the student was marked as being in the 2nd best (out of 5) category. Perhaps because in one standard the student was marked in the middle category, the student might receive a B minus, if "shades of B-ness" must be used. But most teachers would use their professional expertise to classify this student as roughly a B student on this assignment.
But, unfortunately, in an attempt to be objective, educators often find the need to "hide" behind mathematical formulas. They choose to let fractions, rather than professional expertise, make grading decisions and choose to provide grade information rather than learning-focused feedback.
Here's what that same rubric might look like when a formula is applied to it:
In this scenario the student would receive a total of 15 points (4+4+3+4) out of a possible 20. This fraction would then be converted to a percentage and the student would receive a 75%. Depending on the school system, this 75% would either be a C or a D.
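As a sketch, the fraction-to-percentage conversion described above looks like this. (The 10-point letter scale is one common assumption; as noted, some systems would call 75% a D.)

```python
# Rubric marks for standards 1.1-1.4, each scored out of 5.
scores = [4, 4, 3, 4]
max_per_standard = 5

# Convert the point total to a percentage: 15 / 20 -> 75.0
percent = 100 * sum(scores) / (len(scores) * max_per_standard)

def letter(pct):
    """A common 10-point scale; a stricter system would grade 75% a D."""
    for cutoff, grade in [(90, "A"), (80, "B"), (70, "C"), (60, "D")]:
        if pct >= cutoff:
            return grade
    return "F"

print(percent, letter(percent))  # prints 75.0 C
```

The formula confidently outputs a C (or a D on a stricter scale), even though professional judgment reading the same rubric said B.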
But when we first analyzed the rubric, our professional expertise and instinct told us this student was in the B range on this assignment. Why then would we allow a mathematical formula to tell us the student should receive a C or a D? Why would we remove our expertise from the decision? More importantly, though, why would we get ourselves caught up in a "grading game"? Why would we employ practices that lead to students arguing about a grade or scrambling to earn more points when, instead, we could employ practices that provided feedback useful for learning?
Here's another way to use that same rubric:
By using this rubric, we prevent ourselves from getting caught up in a numbers game. We're not arguing between 75 or 76 or 77. It's very easy to see that, by and large, this student should be rated in the B range. We don't need 100 different points of rating to determine that this student falls into the B range - and frankly, does it really matter where in the B range the student falls? Because we're most interested in learning, right? Therefore, we don't really care about the B or the 75 or whatever the grade is. We care about providing feedback that will help a student learn, correct?
A numerical score of 75 leads to one of two things:
- A debate about the grading system, or
- A request by the student to earn more points
But if we provide feedback in the form of a letter grade that is not necessarily the result of a mathematical formula, we have the potential to get students to ask questions about how they can improve their learning, especially if the letter grade feedback is attached to descriptive feedback.
What if you used a descriptive chart like the one below that was created by Math teachers at Salem High School in Salem, Virginia?
A chart like this one attaches a descriptive meaning to the letter grades. The B no longer means that the student received 80-89% or 87-93% of the possible points. Instead, we now know that:
- In 3 of the 4 standards the student has a strong understanding but a fair number of mistakes are still being made;
- To improve to the A level in these standards, the student needs to check his/her work and strive to reach a point of complete understanding as evidenced by little to no mistakes and the ability to lead someone else; and
- In 1 of the standards assessed, the student shows a basic understanding of the concepts but needs a lot more practice, as evidenced by his/her ability to start but then the tendency to get stuck.
The descriptions in the chart above might not be the perfect ones for your class or your grade or your school, but they are examples of feedback that is much more learning-focused than typical fraction-based grading practices. If our goal was just sorting and selecting students, then perhaps a focus on an assessment OF learning based on fractions would suffice. But we are in the business of unlocking human potential to help all students learn and grow. Therefore, we need to focus on assessment FOR learning and descriptive feedback.
Please don't fall into the trap of thinking a mathematical formula is more objective than your expertise. You know much more about learning and about your students and about their growth than a formula does. Use your expertise to provide descriptive feedback. Tell your students where they are and what they need to do - not so they can earn enough numerator points to raise their grade but so they can master the important content and skills you teach.
- Typically teachers will tell students that they can take their test home and do corrections on their own. Some will. Some won't. Some will do it just to get back points but won't actually learn the content better. Some might even cheat to get the right answers. Mrs. Shannon made sure that this assessment was a learning tool by having the corrections be a classroom activity guided by teachers.
- Mrs. Shannon clearly used assessment-elicited evidence to design her lesson. It was from the test results the day before that she was able to group her students so that they would receive the help and instruction that they need in order to learn.
- The entire activity occurred because Mrs. Shannon realized from the test that the students as a whole had not mastered the content. This test gave her the feedback she needed to know that if her goal was to increase learning she was going to need to find a way to reteach some of the material. The beauty of this activity was that it then allowed her to reteach to each student only what he or she needed.
- The idea of earning back points was not the major focus of this activity. The major focus was learning the material. In fact, because Mrs. Shannon made this a class activity I would bet that the outcome would have been almost identical if students hadn't been able to get points. In other words, this was about learning. The test the day before was used by Mrs. Shannon NOT as a way to determine the students' grades but rather as a way to determine their learning so that she could adjust her instruction with the ultimate goal of having her students learn.
Recently, our friends at JumpRope asked Pawel Nazarewicz and me to share about our personal experiences with Standards Based Learning here at Salem High School. We put some thoughts together which they turned into the blog post linked to this page.
We hope members of this network will find it helpful, and we'd love to hear from you about your experiences, struggles, and successes with SBL.
Here's the link to the post: https://www.jumpro.pe/blog/standards-based-teaching-and-learning-after-year-one/
The Assessment Network has grown to the point that it now contains many different examples of how the power of assessment can be maximized in the classroom. These ideas are scattered throughout the site. To make this site easier to navigate, this one blog will include links to all of the other classroom AFL examples. It's sort of like an AFL Wal-Mart - everything you need in one blog!
- A school counselor uses AFL (AFL chart for students to track their progress)
- Examples of using rubrics (rubric for students to assess their own learning)
- Don't confuse it for a specific strategy (various AFL strategies and grading practices)
- An everyday activity can become an AFL tool (review sheets that allow students to track level of mastery)
- 3 Perspectives on an AFL example (test grades replacing quiz grades)
- Simple AFL activity in Math (getting feedback prior to a graded activity)
- An AFL email to parents and students (communicating AFL to stakeholders)
- A Salem High School teacher uses a GPS (retaking quizzes to reach mastery)
- An Assessment Becomes a Learning Tool (enhancing the impact of test corrections)
- AFL Communication and a Self-Assessment Rubric for Math (a rubric students can use to self-assess progress)
- The Power of Asking "Can You" - an example of training students to assess their own progress
- Would This Work? (A Question for Math Teachers) - daily quizzes on specific steps to Math processes
- How AFL Shouldn't and Should Look in a Math Class - more than just teach, test, and retest
- Teaching More than the Notes and Rhythm (by Mark Przybylowski)
- How AFL could be applied to a PE class (students charting their own progress)
- AFL and Heart Rate Monitors
Trades and Industrial
- Laying an AFL Bead in Welding (the importance of feedback)
- An AFL review strategy that can be used by any teacher in any content area (white boards for review)
- Students checking their progress (student progress check sheet)
- LOOPING: Where AFL and SBL all come together
- 10 Formative Assessment Tech Tools by Edutechchick
- 56 Examples of Formative Assessment by David Wees
- The Multi-Colored Note Card
- Practical ideas for more frequent assessment to enhance learning (4 general examples)
- It's about students taking ownership of learning (general examples of how students can assess themselves)
- Getting and Giving Student Feedback (exit slips and more)
- AFL principles can guide many different types of classroom practices (students calculating their grades)
- AFL strategies and descriptions (7 different AFL strategies and descriptions of how to use them)
- School administrators try creating classroom AFL objectives (sample AFL objectives)
- AFL Flashcard Review
- An Elementary Activity that Applies to All Grade Levels (daily review sheet)
- The Pre-Test
- AFL Communication and a Self-Assessment Rubric for Math - an example of communicating the purpose of AFL practices to students and parents
- The Power of Asking "Can You" - an example of training students to assess their own progress
- Students checking their progress (student progress check sheet)
- An AFL review strategy that can be used by any teacher in any content area (white boards for review)
- A Salem High School teacher uses a GPS (retaking quizzes to reach mastery)
- An Assessment Becomes a Learning Tool (enhancing the impact of test corrections)
- An everyday activity can become an AFL tool (review sheets that allow students to track level of mastery)
- AFL, Art Class, and Failure Management - learning from trying
- Ideas for Making Retakes and Redos Work
- Focused Formatives
- Student Self Assessment
- The AFL/SBL Exit Slip
- 5 Fantastic, Fast Formative Assessment Tools from Vicki Davis
- Making Every Assessment a Formative Assessment
The January 2013 edition of the Association for Psychological Science's journal has a great article about what impacts students' learning. What they found is that certain techniques and strategies have a positive impact on students learning content (and should be continued) and that certain techniques and strategies have little to no impact on student learning (and should be stopped).
The strategies with little impact include summarizing content, highlighting, and rereading material and notes.
The strategies that had a positive impact were ones that fell into the category of taking practice tests.
Some may find those results surprising. Strong AFL teachers shouldn't be surprised at all. AFL teachers know that Memory is the Residue of Thought. AFL teachers know that students and teachers need the feedback that comes from regular practice. Teachers who, for example, have used regular Quia quizzes to prepare students and to gain benchmark data know and attest to the value of taking practice tests.
What was disturbing, though, was the finding that having students prepare/study by rereading notes and by using the highlighter method was more common than having students take regular practice assessments.
It's time to put away the highlighters and break out the practice tests!
Here is a link to the APS study: http://www.psychologicalscience.org/index.php/publications/journals/pspi/learning-techniques.html
RECOMMENDED READING: Here is a link to a post about the study found on the AJC Get Schooled blog: http://blogs.ajc.com/get-schooled-blog/2013/01/11/how-to-study-stop-highlighting-stop-cramming-stop-rereading-notes-start-taking-practice-tests-and-using-flash-cards/
Thank you to Catherine, the EDUTECHCHICK, for the following blog and Daisy Dyer Duerr for sharing it on Twitter. Here are 10 great tools to add to your AFL toolkit.
Last spring during our division's professional development day I attended a presentation led by Curtis Hicks and Mark Ingerson. Their presentation was based on the book Why Students Don't Like School by author and cognitive scientist Daniel Willingham. There was one statement in particular they shared from his book that really stuck with me. In his book, Daniel Willingham says MEMORY IS THE RESIDUE OF THOUGHT.
Think about that statement for a moment - MEMORY IS THE RESIDUE OF THOUGHT. All teachers are trying to get students to remember content. If Willingham is correct, then we must first get students to THINK about content. There can be no residue of thought if there isn't first thinking.
Reflect on your own classroom and teaching practices. Is the truth behind this statement evident in your classroom? I would contend that it's worth asking yourself the following question: "Am I doing enough to give students opportunities to THINK about my content?"
If it's true that MEMORY IS THE RESIDUE OF THOUGHT then the following statements are probably true as well:
- The more one thinks on something, the more "residue" that is left.
- More residue leads to greater memory of content.
- Greater memory of content leads to an increase in learning.
As you're thinking about your classroom and how much opportunity for thought your students have, I think it's worth noting an important distinction. There is a huge difference between LISTENING to content and THINKING about content.
Students often listen to content and information, and we fool ourselves into believing they've been thinking about it just because they heard us. However, we all know that there have been many times when we have been listening to a speaker while our thoughts were a million miles away. Or maybe we have a few students who are engaged in a meaningful class discussion about the content, which also fools us into thinking that our class as a whole was really thinking about the content.
If we want students to actually THINK about the content, then we need to structure activities IN class that require them to engage with the content, to form opinions, to use facts, and to apply. We have to create opportunities to really think. This concept applies to ALL levels of students. Just because your students are IB or AP students who know how to sit and listen politely doesn't mean that they are thinking about your content.
This is where AFL comes in. Strategies that are based on the philosophy of AFL are strategies that lead to students thinking about content and assessing their own understanding. AFL strategies inherently lead to students THINKING about content.
As you head back to school from your Christmas/Winter break, consider what you can do this year to ensure that your students are actually thinking about content and building the residue that will lead to memory. For AFL strategies and ideas that will help you accomplish this goal, check out https://salemafl.ning.com/profiles/blogs/practical-examples-of-afl-to right here on Assessment FOR Learning.
One of the great hurdles to moving toward a Standards Based approach to learning, teaching, grading, and communicating is the fact that our students have been conditioned to operate in a points based system. They have been raised in a system that focuses more on earning points for grades than on standards based feedback focused on learning.
Educators and schools making the shift to SBL philosophies often develop strategies and plans for communicating SBL principles and practices to parents. The thinking is that parents have been conditioned by the same system that has trained their children and that parents will be upset if their children are faced with new constructs, lingo, and grading practices. While communicating clearly with parents is important, focusing first on how to win over parents overlooks the most powerful communication ally teachers possess - students.
If students understand the goals of SBL and how it will benefit their learning, then they become powerful advocates for meaningful assessment strategies. Students are the buffer between school and home. We should never underestimate the importance of making sure that they understand the value of what we're doing with them in the classroom. If they can articulate a concept appropriately, then their parents are more likely to hear about practices such as SBL in a positive manner - even if we have never directly communicated with them about those practices.
Beth Denton, a wonderful Math teacher at Salem High School, recently shared the following email with me. The first paragraph is Beth's explanation to me. The second paragraph is from the student to Beth. Notice that Beth recognizes that the student is still too focused on the grade. However, the outcome, even if influenced by a desire for a higher grade, is one that leads to a student taking ownership of learning as a result of Beth's standard based feedback.
Here is the email I received from Beth:
An email from a concerned student. While it's still grade focused, I see hints that we're moving in the right direction. This student knows what she needs to improve on and is looking for MASTERY of the topics! Yay!

Hi Mrs. Denton! I haven't gotten a chance to have a conversation with you about this so I thought I would send you an e-mail and come in some time next week after school to start working. My current grade in this class is an 87 and my goal for this semester is to have an A. I realize that since the points are different, my best bet is to make up some of the previous "1.0's" that I got and of course, continue to ace tests and any graded assignments. According to JumpRope, the skills that I personally need improvement on include: Determining whether figures have been rotated, dilated, or reflected, Parallel Lines cut by a transversal, and finally, The unit 5 congruent triangles portion of the test. I would like to improve my mastery on these skills not only to get my grade up, but to do well on these topics during the SOL. If you are available I plan to be staying after school as much as possible next week. So sorry for such a long e-mail haha! Thank you for understanding.
Are you training your students to think in terms of SBL? Are your students still coming to you chasing points, or are they, as Beth's student exemplifies, able to communicate their specific areas of strength and need based on your standards based feedback? (By the way, JumpRope, the standards based grading system referenced in the student's email, is a phenomenal tool for standards based communication.)
If this student's parent were to ask her to explain how Mrs. Denton grades, I have no doubt that the student would be able to do so in a way that would cause the parent to appreciate Mrs. Denton's instructional practices. More importantly, this student would be able to communicate that Mrs. Denton is grading and assessing in a manner that enables her to learn.
As educators we definitely care more about Learning than we care about Grading. So it tends to frustrate us when our students seem to only care about getting a Grade.
Do you ever wish you could redirect your students' focus to learning? While it's not easy to do so, it's also not impossible. Since most students will not unilaterally change their focus, we have to make sure that:
- Everything we do reinforces the fact that we value Learning over Grading, and that
- Nothing we do encourages students to focus on Grades.
Those 2 ideas might sound overly simplified, but the ramifications are immense. If we honestly analyze traditional assessment practices, we'll start to find that much of what we do puts a focus on getting a grade. Even the relatively "enlightened" practice of allowing retakes can end up causing kids to focus on trying to raise their grades rather than learn content. (For more on the subject of retakes, read this previous post.)
But when a teacher gives students regular feedback that is focused on learning - rather than on grades - it is possible to train students to think, communicate, and focus in a learning-centered manner. Below is an email that one of our teachers sent me recently. In it she recounts a conversation with a student who exemplified a focus on learning. I hope as you read it you can imagine the satisfaction this teacher felt (as opposed to the typical frustration we feel when students just care about grades).
So I've been talking about mastery and areas of weakness more this year with my students. I'm trying to communicate it better, and I have done different exercises with them to help them diagnose their weaknesses.
Anyways, cool moment today - I had a girl who came to me on her own willingly and took out one of the papers I gave her last week on which she diagnosed her weakness during a station review.
She said, "Can I go in the hallway and work on my weakness?"
I said, "Well, I haven't handed back the mastery sheet yet from your test today, but of course you can. Do you know what your weak standards are?"
She responded with, "Yes I do, I have the paper we used last week where we did stations, and I was able to pick out what I need to work on."
Keep in mind, this is a student who is more of a typical or middle-of-the-road student, not necessarily one who would be seen as an overachiever. In other words, my talk of "mastery and weakness" is working! :)
Awesome! How fun it was to read this email and to celebrate with a teacher who is helping students value learning!
(For more information on how this specific teacher helps students identify areas of weakness, read this previous post.)
The article below appeared in the York Daily Record from York, PA on July 10, 2015. It was written by Angie Mason who can be reached via email at firstname.lastname@example.org or on Twitter at @angiemason1. I don't know if I've ever read a better real world case study on assessment-related issues than this article.
Please don't, after reading this, leave focusing on a fear of getting sued by parents. Instead, as you read it, look at all the different assessment related topics embedded within it. Notice class rank, GPA, zeros, grades impacted by behavior, communication between colleagues, school policies, extra credit, absences, full credit v. partial credit, make-up work, exams, detention, scholarships, college acceptance, etc. These are all topics we deal with regularly in schools, and they're all a part of this story.
My encouragement is for educators to read this article and then reflect on how it might have looked if this school - or at least the educators involved in this story - adhered to the principles of AFL. Specifically, how could adhering to the following concepts have altered the story:
- Assessment is primarily a feedback tool for students to guide their learning and for teachers to guide their instruction.
- The goal of teaching is learning, not grading.
- A grade should communicate a student's level of mastery of specific standards or learning objectives.
How would this story have played out in your school? In your classroom? How would the principles of Assessment FOR Learning have impacted this story?
Feel free to leave your comments!
At our 1/10/18 faculty meeting, teachers were asked to bring a recent and typical lesson plan with them. Meeting in groups of 3 or 4, teachers shared the details of the lesson plans with each other.
Then a few thoughts were shared with the entire group about the relationship between Assessment and Pedagogy. Sometimes we think of assessment as what happens after the pedagogy occurs. The faculty was encouraged to think of assessment as part of the pedagogy itself.
Keeping in mind that assessment is anything that results in getting and/or giving meaningful feedback, no lesson can be at its best if it doesn't include some type of assessment activity. Learning requires the getting and/or giving of feedback.
Teachers then had a conversation in their small groups about how best to weave assessment into the lesson plan they brought with them.
Hopefully, practical conversations like this lead to productive collaboration and an increased use of meaningful assessment. Maybe an activity like this would benefit your faculty?
As readers of blogs on this site know, I love the philosophy of Assessment FOR Learning. However, a philosophy is only as valuable as the results it produces. I'd like to share with you some results of AFL's impact on teaching and learning at the school where it is my privilege to serve as an Assistant Principal - Salem High School in Salem, VA.
From school year 1999-2000 to school year 2007-2008 (the school year BEFORE SHS began making AFL its professional development focus), Salem High School averaged 89.6 retentions per school year. This means that 89.6 students - which in a typical year would be about 7% of our student body - failed to move on to the next grade level.
AFL's focus is not about getting students to pass. It's about getting students to learn and then making sure that grades accurately reflect that learning. Obviously, though, passing classes would be a byproduct of such a focus.
Since the 2008-2009 school year, when Salem High School's teachers began adopting AFL strategies and exploring how to use assessment to increase learning, SHS has averaged 44 retentions per year. That is slightly fewer than half the number of retentions that we averaged during the 9 previous school years. In a typical year, 44 retentions would be about 3.5% of our student body.
During that same period of time our graduation rate has increased, our state test scores have continued to improve or stay at a very high pass rate, our percentage of students taking dual-enrolled and advanced courses has remained incredibly high, and our SAT scores have remained at or above the national average. Students at Salem High School are not passing because they are being passed along. They are passing because they are learning. And they are learning because the wonderful faculty of SHS is taking very seriously its efforts to use assessment as a learning tool.
AFL works, and I look forward to seeing our data get even better as our teachers become even more proficient at incorporating AFL strategies into their everyday lessons.
Instructional practices based on the philosophy of Assessment FOR Learning (AFL) just make sense. End of story. To not practice AFL is to ignore how people learn. There really isn't room for debate as to whether or not one should practice AFL. Such a debate would be more appropriately titled "Should Teachers Care About Whether Or Not Students Learn: Yes or No."
I was reminded recently of AFL's centrality to learning when I met with Erik Largen for his summative evaluation. Erik is a special education teacher at Salem High School, and it is my privilege to see him work with his students on a daily basis. Erik teaches in our school's multiple handicapped classroom, and his students are amazing! (I will admit my personal bias - they are my favorite group of students in the school.)
Many of Mr. Largen's students have great cognitive and physical needs. Mr. Largen cares about them as though they were his own children. He believes they can learn, and he believes he can make a difference in their lives. Therefore, just like all great teachers do, Erik works tirelessly to enable them to learn, grow, and make progress.
In our summative conference the other day, Erik showed me many examples of tools - all based in AFL philosophy - he uses in his classroom. These tools were what reminded me of an important truth: AFL isn't just one way to teach; AFL is teaching. Before we go any further, take a look at the following tool Mr. Largen uses. It's a task analysis form to help a student learn to take off his coat:
Mr. Largen has a student who needs to learn how to independently take off his winter coat and properly hang it up on a hook. If Mr. Largen just tells him to do this or even shows him how to do it, the student will not be able to learn the task. Therefore, Erik has to break down the task into standards.
I bet most of us have never thought about the 14 different sub-standards one must master before one can independently take off and hang up a coat. That's what the teacher is there for, though. It is the teacher who is the content specialist and who knows each component that must be taught if students are to reach mastery. This is true for learning to use an Algebraic formula, to lay a bead in Welding, or to write a paper in English class every bit as much as it is true when learning to take off a coat.
After identifying the standards that must be mastered, Mr. Largen assesses his student on those standards. This allows him to know where he must focus his instruction. Without first assessing his student, how does he know what to teach? For example, this student can already independently walk to the hook, therefore, that standard requires little focus. However, this student cannot undo the zipper without a teacher's hands guiding his. The elements of undoing the zipper will require a lot of focus.
Why did Mr. Largen assess this student? To gather the feedback he needs to help the student learn. That's AFL.
After teaching and practicing the skills, Mr. Largen reassesses the same standards. This allows Mr. Largen to document progress, to give the student feedback and praise where appropriate, and to know how to continue to plan instruction. The purpose of the assessment is once again to help learning. Based on the feedback from the assessment, Mr. Largen will know exactly what additional instruction and practice is necessary. Because the assessment is standards-based, he will know which specific standards still require the most focus.
This process of assessing based on standards, reteaching as needed, and then reassessing seems quite basic and obvious in this situation. But it's no different than any other situation. While a task analysis sheet may or may not have been involved, this is how you learned to ride a bike, drive a car, throw a ball, play a video game, or play an instrument.
AFL makes sense. To expect someone to learn any other way - unless we believe students should learn on their own - really does not make sense. Unfortunately, when seen through the lens of AFL, many traditional educational practices no longer make sense:
- Would it make sense for Mr. Largen to show this student how to take off his coat, assess how well he did it, give him a grade, and move on? Of course not, at least not if Mr. Largen cared more about learning than grading.
- Would it make sense for Mr. Largen to grade each assessment of progress and then average them together? Of course not - again, not if Mr. Largen cared more about learning than grading.
- Would it make sense for Mr. Largen to focus on each standard the same amount regardless of what the student could already do? Of course not.
- Would it make sense for Mr. Largen to just teach the subject without first figuring out what his student knew? Would it make sense to wait until the "end of the unit" before finding out how well the student learned? Would it make sense to say "It's my job to teach it, but it's your job to learn it?" Of course not. Of course not. Of course not.
I met a teacher once who told me that she thought AFL sounded good philosophically but that it didn't work practically. Really? Which part is impractical?
- Is assessing students to find out what they know impractical?
- Is it impractical to assess students based on standards so you'll know specifically where they need to grow?
- Is it impractical to reassess throughout the learning process to see how students are progressing?
- Does giving meaningful and useful feedback to students not sound practical?
- Does using feedback to guide instruction not apply to certain subjects?
- Is it ever more important to focus on coming up with numbers to average together than it is to guide learning?
Because I firmly believe that teachers want students to learn, the best I can figure is this: If someone disagrees with the use of AFL-based strategies they must not understand what AFL is.
We all learn from feedback, and that feedback needs to be based on assessments that measure the standards required for mastery. Thanks, Erik, for the great reminder of how central the philosophy of AFL is to the learning process.
When our school first started investigating Assessment FOR Learning 4 years ago, the first teacher we had address our faculty with an AFL classroom example was Bert Weschke, our Welding teacher. Recently, as I have engaged in some conversations about applying AFL practices to the classroom - or more specifically, NOT applying those strategies - I have come back in my mind to Bert's example. There's a lot to learn about AFL from the way Bert Weschke teaches students to "lay a bead".
A weld bead is a deposit of metal that results from a passing of the welding torch over metal. Bert shared that when teaching students to lay a bead, he has them practice numerous times on a piece of metal. As they are practicing, he is moving around the room providing them with feedback. He has already taught/lectured on how to lay a bead. Now, as he moves about the room, his students get plenty of practice and receive plenty of feedback.
Eventually, the student will have to submit a bead that receives a summative grade. Until that point, though, each student will repeat the process over and over with the goal of mastery in mind. The feedback the student receives might come in the form of a grade - such as "If this was the final product you'd get a C." - but it isn't going to impact the grade.
This seems to me to be the common sense way to teach Welding. Imagine a Welding teacher lecturing and demonstrating how to make a bead, telling the students to study his notes on bead laying that night, and then taking his students into the shop the next day for a hands-on test before moving on to the next topic. It just wouldn't make sense - unless, of course, mastery of the skill was not the goal.
So why does it make sense to teach this way in a Science class or a History class or any other classroom? It doesn't.
If students are going to master content THEY MUST BE GIVEN OPPORTUNITIES TO PRACTICE THE CONTENT AND THEY MUST RECEIVE FEEDBACK FROM THE TEACHER. Grading the student really should be secondary. The feedback could look like a grade - "If the final test were today you'd have a C." - but it really shouldn't be what determines the grade.
It's true that some students can listen to a lecture or read notes and then do well on a test, but:
1. Not all can,
2. This doesn't ensure long-term learning, and
3. This makes the teacher irrelevant.
No matter the level of the student or the level of the course, teachers MUST provide opportunities for practice and they must give regular feedback along the way. That feedback could be entered into a grade book; it could be a score on a unique feedback scale (such as a check or check+); it could be descriptive and in paragraph format, or it could be a simple statement such as "Keep working on _____."
How much feedback is too much? If you're following kids home in the afternoon to give them feedback instead of being with your family, then you probably need to stop. Until then, keep giving feedback.
As I think about Bert's example of teaching Welding, I'm reminded of several History professors I had in college. By lecturing and giving notes without any feedback or assessment prior to the quiz, large test, or exam, these professors essentially ended up assessing whether or not:
1. I had strong listening skills,
2. I could memorize notes, and/or
3. I could teach myself.
What they weren't assessing was how well THEY TAUGHT me the content.
Let's not be like those professors. Instead, let's be like a good Welding instructor. Let's make sure that students have many opportunities to practice and receive FEEDBACK. Let's make sure TEACHERS lead students to mastery.
Assessment in on-line classes presents significant challenges for both students and teachers, especially for teachers like me who place a lot of importance on evidence gathered throughout the course through performance tasks.
The purpose of framing assessment around performance tasks is to clearly distinguish those who really understand from those who only seem to, because through performance, understanding becomes "visible". This is the reason that assessments are frequently designed as projects, which are essentially complex, “messy,” and multi-staged problems to be solved. These critical-thinking elements help teachers see levels of comprehension displayed by students. Tasks with these characteristics also go beyond furnishing a snapshot of student understanding to providing a "scrapbook" of understanding - in other words, a collection of evidence gathered over time, instead of through a single event. This is crucial because "understanding develops as a result of ongoing inquiry and rethinking". (Wiggins pg. 152)
However, this way of framing assessment still goes against many assumptions our students have about learning and thus about grading as they are often considered equivalent. I have spoken with my students at length about this to understand their perspectives, and they offer a variety of interesting ideas that can be summed up in the following two phrases. Whatever is given a grade by the teacher is important, and anything else can be skipped. Further, grades are derived from quizzes and tests.
Several problems arise from these opposing perspectives on learning, and they need to be looked at carefully. Among them is how forums are approached. Forums provide opportunities for students to put concepts found in the readings in their own terms and bounce ideas off their fellow students. Groups collaboratively plan a product or performance by facing contextualized issues. These exercises give students feedback and practice at doing the task, both valuable for the summative assessment that will come later in the course.
Fellow teacher and blogger Lisa Lane is particularly concerned about the second point because like me, she wants students to extend their understanding of the topic at hand through discussion in forums.
"In terms of course design, I don’t consider the discussion 20% of the course, just 20% of the grade. It’s more like half the class, because it’s the processing and sharing of the knowledge learned via presentation and reading. It’s the heart, not a side activity. It’s lower stakes (not 50% of the grade) because I want the students to feel free to explore." (Lane, 2009)
This seems simple enough, but my experience corroborates Lisa's - the students just don't get it. The message that students receive is that discussions held in forums are 20% of the class and therefore deserve only 20% of the energy they devote to the course.
I have found a way to begin to resolve this problem. From the beginning of my courses I make it clear that grades will be based only on summative assessment, which will take place at or near the end of the course. All other activities are formative and for that reason are not graded. To avoid misunderstandings regarding the importance of non-graded formative activities, I give a mark to each activity, a number according to its relative value. I keep these on a Google spreadsheet permanently linked to the course so it is always up to date and visible to students. Because the spreadsheet is linked rather than attached, I never have to upload new versions, save them under new names, or send the document out to students; they can see updates in real time whenever they check into the course.
This has effected a change in students' attitudes towards formative activities because students can’t stand to receive a low number, even if it doesn’t count towards the grade. I have told them that because activities are formative, they can be improved by going over my qualitative feedback and the rubrics. This of course means being flexible with due dates and very patient with problems students and groups have in submitting assignments on time. It has motivated them to interact more with me, with classmates, and with the rubrics, and it has focused their attention, even if inadvertently, on the learning process - writing, editing, consulting, re-writing, re-editing, consulting again - and less on the grade itself.
Also, if a discussion is designed to last two weeks and it is worth six points (marks), I assign three to the first week and three to the second week. This gets students to participate more constantly and not just at the end of the designated period for that discussion.
Students can compare the number of marks they have to the total possible number at any given moment, which serves as an alert for students who fall behind. At the end of the course, they are awarded a Professional Development score, which is simply the sum total of their marks. This indicates the effort given towards the activities in the course and their level of mastery of the key course concepts. In nearly every case high marks coincide with high grades and low marks with low grades. Although this score is not part of the grade, students take it as seriously as the grades.
Although it may be counterintuitive to use numbers (marks) to encourage students to practice essential skills, it seems to be a language symbol that communicates a message far clearer than many of my attempts to explain and motivate.
---- References ----
Lane, Lisa. "Ramblings on Assessments that work and assumptions that don't." Blog post, 2009. http://lisahistory.net/wordpress/?p=392
Wiggins, Grant, and McTighe, Jay. Understanding by Design. Association for Supervision and Curriculum Development, 2005. pg. 152.
Article originally published in Online Classroom, August 2010.
As has been stated on this site before, traditional grading practices have led to a culture of "Quest for Numerator Points" in our schools and with our students. Students have been trained and conditioned to care more about grading than learning.
We educators wish this wasn't the case, yet to complain about it makes as much sense as Sea World trainers complaining that Shamu only does tricks when rewarded with fish (more on that concept here). In other words, while we don't like the fact that students routinely ask teachers for opportunities to earn more points, the reality is that they are doing what they've been conditioned by us to do.
When we measure student progress in terms of points earned over points possible it just makes sense that students want more points. The solution to this situation is to have a way to communicate how students are doing other than an average of all work completed. The solution is to communicate student progress toward the mastery of specific standards. The solution is Standards Based Learning.
Consider this. A student asks how she is doing in your class. You respond, "You're doing pretty well. You currently have a 92." What is the student supposed to do if she wants to improve? The only logical response is for her to consider how many points she needs to get on the next assignment to raise her average or to think of ways to earn additional or extra points. Either way, the focus is on earning points instead of on increasing learning.
However, if a student asks how she is doing in your class and you are able to respond by telling her how she is progressing toward specific standards, then you have the potential for students to ask more productive questions that focus on learning. Want to see what I mean? Here is a recent question a student at our school emailed to her teacher:
I am having trouble with the Multiplication and Division of the radicals, is there anyway I can receive more help on trying to understand them? It's my only struggle.
Wow! Isn't that exactly the sort of question we want students to ask? This student understands what she knows and doesn't know because of the specific standards-based feedback she has received from her teacher. This student has been trained to seek to master standards instead of just collect points. The student is seeking out her teacher in a positive, productive, and meaningful manner.
Let's face it: Students will always care about grades. We did when we were students, and it's completely understandable. But if we want them to value learning, we - the educators - need to provide them with a new paradigm. This is the beauty of Standards Based Learning.