Fantasy Football and the Problem with Averaging

Obsessed with Fantasy Football

I have to confess something: I care way too much about Fantasy Football. Throughout the fall, I’m constantly checking my Yahoo Fantasy app, plotting my next waiver wire strategy, or looking online for updates about player injuries. I am addicted to Fantasy Football.

This year I was the champion of my Fantasy Football league. Actually, that’s an understatement. I smashed the competition!

Players in our league can win in 3 categories:

  1. Regular Season Champ: After 13 weeks, this team has the best win/loss record and qualifies for the playoffs as the top seed.
  2. Playoff Champ: This team makes the playoffs and then wins the 3-week end-of-season tournament.
  3. Total Points Champ: This team scores the most points over the course of the 16-week season.

As this year’s regular season, playoff, and total points champ, my team was the undisputed champion of the league.

My goal isn’t to brag about my prowess at Fantasy Football. (Although I have to admit I enjoy doing so…) But for this post to help educators, I first need you to understand the following: My season was the best season of anyone in my league and would be considered a dream season for anybody who plays Fantasy Football.

Then I got an email from Yahoo Fantasy Sports, our league’s Fantasy Football Platform.

A Surprise for the Champ: My Season Story

I love Yahoo’s mobile app, their player updates, and the outstanding data analysis they provide to help players make decisions. So when I received an email from Yahoo with a link to my “Season Story,” I was excited to read their analysis of my successful year.

It turned out that by “Season Story” Yahoo meant it was sharing with me an overall grade for, or assessment of, my season. Imagine my surprise when I learned Yahoo assigned a B- as the grade for my dream season! How could this be?

Grading: Yahoo-style

Much like what happens in the traditional American classroom, Yahoo had used a formula to determine my final grade. The formula averaged together the following 3 key data points:

  1. Projection and Final Standing: 40% of the Season Grade
    This compares where I ended my season with where I was projected to finish at the beginning of the season. Yahoo graded me at an A level, which makes sense. After all, I was the champion in all three of our league’s categories. Plus, I had been projected to finish 14th out of 16 teams. With this combination of overall achievement and growth, if I wasn’t an A in Projection and Final Standing, who could be?
  2. Weekly Performance: 30% of the Season Grade
    Yahoo averaged together each week’s performance to get this score. Yahoo graded me at an A- level. An A- makes sense. I could even agree with a B+. Some weeks my team was amazing. Other weeks it was good. But it was never bad.
  3. Draft Strategy: 30% of the Season Grade
    Yahoo graded me at an F level. In other words, at the beginning of the season, Yahoo didn’t think I had selected a good team. Was Yahoo correct that I picked the wrong players? That might have been a logical prediction early on, and perhaps I didn’t start the season on a strong note. But Yahoo had already accounted for my weak start in the Projection and Final Standing category. Even so, this evaluation of my season’s start ended up being the reason my grade was a B- at the end of the season.


Honestly, this grading methodology makes no sense. The purpose of the season grade should be to communicate how successful the season was. With that in mind, the only grade that should have mattered was the summative score of A representing my Projection and Final Standing. That score shows that I achieved at the highest possible level and that I grew beyond expectations. Averaging together the other data points only detracted from the accuracy of what Yahoo was trying to communicate.
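To see how the math plays out, here is a minimal sketch of that weighted average. It assumes a standard 4.0 grade-point mapping (A = 4.0, A- = 3.7, F = 0.0); Yahoo has not published its exact scale, so the cutoffs are illustrative only.

```python
# Illustrative only: Yahoo has not published its grade-point scale,
# so the letter-to-number mapping below is an assumption.
GRADE_POINTS = {"A": 4.0, "A-": 3.7, "B+": 3.3, "B": 3.0, "B-": 2.7, "F": 0.0}

def season_grade(components):
    """Weighted average of (letter grade, weight) pairs."""
    total = sum(GRADE_POINTS[letter] * weight for letter, weight in components)
    # Convert the numeric result back to the closest letter grade.
    return min(GRADE_POINTS, key=lambda g: abs(GRADE_POINTS[g] - total))

# 40% Projection and Final Standing (A), 30% Weekly Performance (A-),
# 30% Draft Strategy (F) -- the weights described in the post.
components = [("A", 0.40), ("A-", 0.30), ("F", 0.30)]
print(season_grade(components))  # 4.0*0.4 + 3.7*0.3 + 0.0*0.3 = 2.71 -> "B-"
```

Averaged this way, a single F carrying 30% of the weight is enough to drag a dream season down to a B-.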


Comparing Yahoo and Schools

Similarly, the common and very traditional practice of averaging together different types of student data taken at various points in time throughout a school year detracts from the ability of a student’s final grade to accurately communicate mastery of content.

Let’s compare Yahoo’s grading language to the language we use in schools:

  1. Projection and Final Standing = Summative Assessment and Student Growth
    Where a student ends up when all is said and done is the summative assessment of a student’s level of content mastery, and student growth refers to how much they grow from start to finish.
  2. Weekly Performance = Formative Assessment
    All the things students do along the way - the practice that helps them learn, the homework, the classwork, the quizzes, the activities - these are formative assessments. Formative assessment’s purpose is to serve as practice and to provide feedback that helps students both grow and achieve summative mastery.
  3. Draft Strategy = Pre-Assessment
    Where a student is before the learning occurs is the pre-assessment. Pre-assessment data helps us know which formative assessments will be necessary to help individual students grow and to guide each of them toward summative assessment mastery.

Lessons from Yahoo for Educators

I believe that by studying Yahoo’s methodology educators will notice the weakness inherent in our own widely-accepted traditional grading and assessment practices. Specifically, we can be reminded that:

  • Averaging past missteps with later successes falsifies grades.
    Pre-assessment data, or data that represents where a student was early in the learning process, should never be averaged with summative assessment data. The early data is useful to guide students toward growth and mastery, but it should never be held against a student by being part of a grade calculation. Otherwise, we run the risk of having the Draft Strategy dictate the Season Story despite the more accurate picture painted by the Projection and Final Standing.
  • Formative assessment is useful for increasing learning but less so for determining a grade.
    Knowing my weekly performance enabled me to make decisions to help my team improve, but my team not always performing at an A level does not detract from my team mastering its goals and growing appropriately. If, as a result of formative assessment feedback, a student makes learning decisions that bring her closer to summative mastery, why would we then base the score that represents summative mastery on the formative feedback?
  • Formative assessment data loses value once we have summative data.
    Why did Yahoo care about my Draft Strategy and Weekly Performance once it knew my Final Standing? It’s possible that formative assessment data could be used as additional evidence of learning if we are concerned that the summative assessment doesn’t paint a complete picture, but, in general, once mastery is demonstrated, the fact the student wasn’t always at that same level of mastery becomes irrelevant.
  • It’s impossible to create the perfect formula to measure all student learning.
    Yahoo chose to use a 30/30/40 formula. Why? Some schools say Homework should count 10%. Why? Some districts say exams must count 25% of a grade. Why? Some teachers make formative assessment count 40%. Why? Some schools average semesters, some average quarters, and some average 6 grading periods. Why? There is an inherent problem with averaging. We make up formulas because they sound nice and add up to 100%, but there is no definitive formula for determining learning or growth. Averaging points in time, chunks of time, or data taken over time will always mask accuracy. Yet educators, like Yahoo, feel the need to try to find a formula to justify grades.
  • Using formulas to determine grades inherently leads to a focus on earning points instead of on learning content.
    In the case of Yahoo, they didn’t advertise their formula in advance. Now that I know this formula, I still don’t anticipate changing my strategy in the future because, frankly, I don’t care about my season grade. I care about winning. But students and parents are naturally going to care about grades because of the doors that grades on transcripts open or close. As long as there are final grades there will always be an interest in getting good grades. When grades are the result of a formula, it naturally leads to a quest for numerator points, something that may not be connected to learning. When this is the case, students ask for opportunities to earn points. When grades are a true reflection of content mastery, a focus on learning is more likely to result. In these situations, students ask for opportunities to demonstrate learning.

A Call to Action

It’s time for schools to stop being like Yahoo Fantasy Sports when it comes to our assessment and grading practices.

My Season Story grade should be an accurate reflection of where my season ended up. Along the way, I need the descriptive feedback that will enable me to make informed growth-based decisions.

Students need final grades that are accurate reflections of where they end up in the learning process. Along the way, they need appropriate descriptive feedback so they can make informed growth-based decisions, as well.

Traditional grading is rooted in decades of practice, and shifting the course of our institutional inertia to focus more appropriately on learning rather than grading will take effort and time. Schools must choose to embark on Assessment Journeys that lead to accurate feedback and descriptions of learning, mastery of content, and student growth.

Let’s get started today!

Read more…

I just finished watching a TED Talk by Sal Khan, founder of the Khan Academy. Sal was talking about mastery learning and the importance of building strong learning foundations before layering on additional information.

As I watched the video, I was thinking about why a stubborn 25% of students in most upper elementary, middle, and high schools are reading two or more years below grade level.

Sal cites the example of a child who scores an average grade of 75% on a unit test. Most educators would accept 75% as an average score, and in fact most diagnostic assessments would accept 75–80% as mastery level; however, Sal points out that not knowing 25% of the test components is problematic. From the student's perspective: "I didn't know 25% of the foundational thing, and now I'm being pushed to the more advanced thing."

When students try to learn something new that builds upon these shaky foundations, "they hit a wall..." and "become disengaged."

Sal likens the lack of mastery learning to shoddy home construction. What potential homeowner would be happy to buy a new home that has only 75% of its foundation completed (a C), or even 95% (an A)?

Of course, Sal is a math guy, and math lends itself to sequential mastery learning more so than does my field of English-language arts and reading intervention. My content area tends to have a mix of sequential and cyclical teaching and learning, as reflected in the structure of the Common Core State Standards. The author of the School Improvement Network site puts it nicely:

Many teachers view their work from a lens that acknowledges the cyclical nature of teaching and learning.  This teaching and learning cycle guides the definition of learning targets, the design of instructional delivery, the creation and administration of assessments and the selection of targeted interventions in response to individual student needs.

At this point, our article raises the question: What if a shaky foundation is what we're dealing with now? We can't do anything about the past. Teachers can start playing the blame game and complain that we're stuck teaching reading to students who missed key foundational components, such as phonics. All too often, response to intervention teachers are ignoring shaky foundations and are trying to layer on survival skills without fixing the real problems.

Instead, teachers should rebuild the foundation. Teachers can figure out what is missing in each individual student's skill set and fill the gaps... this time with mastery learning.

Mark Pennington, MA Reading Specialist, is the author of the comprehensive reading intervention curriculum, Teaching Reading Strategies. A key component of the program is its 13 diagnostic reading assessments. These comprehensive and prescriptive assessments will help response to intervention reading teachers find out specifically which reading and spelling deficits have created a shaky foundation for each of your students. I gladly share these FREE Reading Assessments with teachers and welcome your comments and questions.

Read more…

Which do you care about more - Learning or Grading?

Educators always answer that question with Learning.  And if you've spent much time on The Assessment Network, you know that our focus is to help educators use assessment FOR the purpose of learning - rather than to help educators figure out new grading systems.

So while our goal is to explore best practices related to assessment so we can increase learning, the reality is that in order to do so we must spend some amount of time examining our grading practices.  It's not that grading practices are the focus, but many traditional grading practices have a negative impact on our ability to provide the type of feedback that leads to learning and on our ability to get students to focus on learning - rather than on "earning" a grade.

One traditional grading practice that has such an impact is an overreliance on creating mathematical formulas to determine a student's grade on a particular assignment.  Based on our stated priority - Learning - we should instead be developing methods for providing descriptive feedback that helps students learn.  Instead, our profession tends to try to develop just the right formula to "calculate a grade," thereby practicing assessment for GRADING rather than assessment for LEARNING.

For example, take a look at the scored rubric below.  Pretend this rubric was used in your class.  The student had an assignment that covered 4 standards or topics - 1.1, 1.2, 1.3, and 1.4.  You've scored the assignment as evidenced by the Xs in the boxes.

Based on this rubric, what letter grade (A, B, C, D, or F) would you think the student should receive for this assignment?

[Image: scored rubric for standards 1.1–1.4, with an X marking the student's level in each]

If you said B, then you answered the same as almost every single educator who has seen this rubric.

When educators are shown this rubric, they tend to think the student should receive a B.  After all, in 3 of the 4 standards the student was marked as being in the 2nd best (out of 5) category.  Perhaps because in one standard the student was marked in the middle category, the student might receive a B minus, if "shades of B-ness" must be used.  But most teachers would use their professional expertise to classify this student as roughly a B student on this assignment.

But, unfortunately, in an attempt to be objective, educators often find the need to "hide" behind mathematical formulas.  They choose to let fractions, rather than professional expertise, make grading decisions and choose to provide grade information rather than learning-focused feedback. 

Here's what that same rubric might look like when a formula is applied to it:

[Image: the same rubric with point values assigned to each category]

In this scenario the student would receive a total of 15 points (4+4+3+4) out of a possible 20.  This fraction would then be converted to a percentage and the student would receive a 75%.  Depending on the school system, this 75% would either be a C or a D.
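Here is a minimal sketch of that conversion, using the point values described above (the percentage-to-letter cutoffs are assumptions, since they vary by school system):

```python
# Rubric marks as points: the 2nd-best of 5 categories (4 points) for three
# standards and the middle category (3 points) for one, as described above.
scores = {"1.1": 4, "1.2": 4, "1.3": 3, "1.4": 4}
max_points_per_standard = 5

earned = sum(scores.values())                     # 4 + 4 + 3 + 4 = 15
possible = max_points_per_standard * len(scores)  # 20
percent = 100 * earned / possible                 # 75.0

# Assumed percentage cutoffs -- many schools use something like this.
def letter(pct):
    cutoffs = [(90, "A"), (80, "B"), (70, "C"), (60, "D")]
    return next((grade for cutoff, grade in cutoffs if pct >= cutoff), "F")

print(f"{earned}/{possible} = {percent:.0f}% -> {letter(percent)}")  # 75% -> C
```

The same marks that read as "B work" on the rubric come out as a C (or a D on a stricter scale) once they pass through the fraction.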

But when we first analyzed the rubric, our professional expertise and instinct told us this student was in the B range on this assignment.  Why then would we allow a mathematical formula to tell us the student should receive a C or a D?  Why would we remove our expertise from the decision?  More importantly, though, why would we get ourselves caught up in a "grading game"?  Why would we employ practices that lead to students arguing about a grade or scrambling to earn more points when, instead, we could employ practices that provided feedback useful for learning?

Here's another way to use that same rubric:

[Image: the same rubric used to assign a letter grade rather than points]

By using this rubric, we prevent ourselves from getting caught up in a numbers game.  We're not arguing between 75 or 76 or 77.  It's very easy to see that, by and large, this student should be rated in the B range.  We don't need 100 different points of rating to determine that this student falls into the B range - and frankly, does it really matter where in the B range the student falls?  Because we're most interested in learning, right?  Therefore, we don't really care about the B or the 75 or whatever the grade is.  We care about providing feedback that will help a student learn, correct?

A numerical score of 75 leads to 1 of 2 things:

  1. A debate about the grading system, or
  2. A request by the student to earn more points

But if we provide feedback in the form of a letter grade that is not necessarily the result of a mathematical formula, we have the potential to get students to ask questions about how they can improve their learning, especially if the letter grade feedback is attached to descriptive feedback.

What if you used a descriptive chart like the one below that was created by Math teachers at Salem High School in Salem, Virginia?

[Image: descriptive grading chart created by math teachers at Salem High School]

A chart like this one attaches a descriptive meaning to the letter grades.  The B no longer means that the student received 80-89% or 87-93% of the possible points.  Instead, we now know that:

  • In 3 of the 4 standards the student has a strong understanding but a fair number of mistakes are still being made;
  • To improve to the A level in these standards, the student needs to check his/her work and strive to reach a point of complete understanding as evidenced by little to no mistakes and the ability to lead someone else; and
  • In 1 of the standards assessed the student shows a basic understanding of the concepts but needs a lot more practice, as evidenced by his/her ability to get started but a tendency to then get stuck.

The descriptions in the chart above might not be the perfect ones for your class or your grade or your school, but they are examples of feedback that is much more learning-focused than typical fraction-based grading practices.  If our goal was just sorting and selecting students, then perhaps a focus on an assessment OF learning based on fractions would suffice.  But we are in the business of unlocking human potential to help all students learn and grow.  Therefore, we need to focus on assessment FOR learning and descriptive feedback.

Please don't fall into the trap of thinking a mathematical formula is more objective than your expertise.  You know much more about learning and about your students and about their growth than a formula does.  Use your expertise to provide descriptive feedback.  Tell your students where they are and what they need to do - not so they can earn enough numerator points to raise their grade but so they can master the important content and skills you teach.

Read more…

Getting Students to Buy Into a Focus on Learning

As educators we definitely care more about Learning than we care about Grading.  So it tends to frustrate us when our students seem to only care about getting a Grade. 

Do you ever wish you could redirect your students' focus to learning?  While it's not easy to do so, it's also not impossible.  Since most students will not unilaterally change their focus, we have to make sure that:

  1. Everything we do reinforces the fact that we value Learning over Grading, and that
  2. Nothing we do encourages students to focus on Grades.

Those 2 ideas might sound overly simplified, but the ramifications are immense.  If we honestly analyze traditional assessment practices, we'll start to find that much of what we do puts a focus on getting a grade.  Even the relatively "enlightened" practice of allowing retakes can end up causing kids to focus on trying to raise their grades rather than learn content. (For more on the subject of retakes, read this previous post.)

But when a teacher gives students regular feedback that is focused on learning - rather than on grades - it is possible to train students to think, communicate, and focus in a learning-centered manner.  Below is an email that one of our teachers sent me recently.  In it she recounts a conversation with a student who exemplified a focus on learning.  I hope as you read it you can imagine the satisfaction this teacher felt (as opposed to the typical frustration we feel when students just care about grades).

So I've been talking about mastery and areas of weakness more this year with my students. I'm trying to communicate it better, and I have done different exercises with them to help them diagnose their weaknesses.

Anyways, cool moment today - I had a girl who came to me on her own willingly and took out one of the papers I gave her last week on which she diagnosed her weakness during a station review. 

She said, "Can I go in the hallway and work on my weakness?"

I said, "Well, I haven't handed back the mastery sheet yet from your test today, but of course you can.   Do you know what your weak standards are?"

She responded with, "Yes I do,  I have the paper we used last week where we did stations, and I was able to pick out what I need to work on."

Keep in mind, this is a student who is more of a typical or middle-of-the-road student, not necessarily one who would be seen as an overachiever. In other words, my talk of "mastery and weakness" is working!  :)

Awesome!  How fun it was to read this email and to celebrate with a teacher who is helping students value learning!

(For more information on how this specific teacher helps students identify areas of weakness, read this previous post.)

Read more…

At our 1/10/18 faculty meeting, teachers were asked to bring a recent and typical lesson plan with them.  Meeting in groups of 3 or 4, teachers shared the details of the lesson plans with each other.

Then a few thoughts were shared with the entire group about the relationship between Assessment and Pedagogy.  Sometimes we think of assessment as what happens after the pedagogy occurs.  The faculty was encouraged to think of assessment as part of the pedagogy itself. 

Keeping in mind that assessment is anything that results in getting and/or giving meaningful feedback, no lesson can be at its best if it doesn't include some type of assessment activity.  Learning requires the getting and/or giving of feedback.

Teachers then had a conversation in their small groups about how best to weave assessment into the lesson plan they brought with them.

Hopefully, practical conversations like this lead to productive collaboration and an increased use of meaningful assessment.  Maybe an activity like this would benefit your faculty?

Read more…

Making Every Assessment a Formative Experience

If you've spent much time on this Network you are well aware that we promote the use of formative assessment - or Assessment FOR Learning.  Formative assessments are often compared/contrasted with summative assessments.  Typically, educators use the term "formative assessment" to refer to smaller checks for understanding and the term "summative assessment" to refer to larger assessments such as traditional unit tests.  But to differentiate between the two can be misleading IF THE ULTIMATE GOAL IS FOR STUDENTS TO LEARN RATHER THAN FOR THE TEACHER TO BE ABLE TO DETERMINE A GRADE.

As educators make their lesson/assessment plans, they should keep this simple truth in mind: WE CARE MORE ABOUT LEARNING THAN WE DO GRADING.  If this is true, then how can we allow some assessments to help students learn while other assessments help us determine a grade?  If we care more about learning than we do grading, then shouldn't ALL assessments help students learn?  ALL assessments should have a formative purpose, right?

Last week I entered the classroom of Mark Ingerson, a 9th grade Modern World History teacher at Salem High School, to conduct a quick walk-through.  What I saw was a great example of how all assessments, even those that traditionally would be considered summative in nature, can have great formative benefit if the teacher is intentionally focused more on learning than grading.

Mark's students were taking a test in his classroom on the unit he had just finished.  The test was designed by him in Quia, and the students took it on their Chromebooks.  Mark had tagged all the questions on this unit test based on the standards they represented.  Therefore, as students finished Mark received more than just a grade; he received an instant report of how well each individual student had mastered each specific standard.

As the students finished and submitted their tests, they immediately (as if they had been trained to do this...) came up to Mr. Ingerson's desk where he, one at a time, gave each of them a post-it note on which he had listed their weakest standard ON THE TEST THEY HAD JUST TAKEN.  After receiving the post-it note, the students went back to their desks to IMMEDIATELY use Quia to practice the standards they had just scored low on.
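Quia generates this standards-level report automatically, but the underlying bookkeeping is simple enough to sketch.  In the snippet below the standard tags and item results are invented for illustration; it groups a student's answers by the standard each question was tagged with and surfaces the weakest one - essentially what ended up on each post-it note.

```python
from collections import defaultdict

# Hypothetical item-level results: (standard tag, correct?) for one student.
# In Mark's class this data comes from Quia; the tags and values here are invented.
item_results = [
    ("WH.5a", True), ("WH.5a", False), ("WH.5b", True),
    ("WH.5b", True), ("WH.5c", False), ("WH.5c", False), ("WH.5c", True),
]

def mastery_by_standard(results):
    """Return the fraction of items answered correctly for each tagged standard."""
    totals = defaultdict(lambda: [0, 0])  # standard -> [correct, attempted]
    for standard, correct in results:
        totals[standard][0] += int(correct)
        totals[standard][1] += 1
    return {s: c / n for s, (c, n) in totals.items()}

mastery = mastery_by_standard(item_results)
weakest = min(mastery, key=mastery.get)
print(mastery)              # per-standard mastery report
print("Work on:", weakest)  # the standard that would go on the post-it note
```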

It definitely takes some work to create the infrastructure needed to provide this sort of instant feedback, and it's true that the Quia format would not work as well for all classes as it does for Mark's.  But there's no denying the simple beauty of what is occurring here:

  • The students are receiving instant and standards-based feedback.
  • The teacher is able to differentiate and personalize the relearning experience for each student.
  • The traditional summative assessment is truly a formative experience.
  • STUDENTS ARE LEARNING THROUGH THE POWER OF ASSESSMENT FOR LEARNING.

Let's remember the truth we believe:
Learning is more important than grading. 

So here's the question for you:
How can you ensure that assessments in your classroom, rather than just help you determine a grade, actually help students learn?

Read more…

Quit Focusing on Standards Based GRADING

Followers of this site know by now that Assessment FOR Learning is way more important than Assessment OF Learning.  In order to make sure our assessment and our feedback increase student learning, we need to communicate and assess in a standards based manner.

Many schools and school systems have begun their Assessment Journeys by focusing on Standards Based Grading Policies.  There are 2 key dangers of having Grading Policies as a point of focus:

  1. This puts too much emphasis on grading.
    Schools need to set the example for students that learning trumps grading.  Anything that reinforces the hyper-focus on grading that tends to motivate students will be detrimental to our goal of keeping our focus on learning.  This includes policies that create one-size-fits-all grading practices.
  2. Policy is not as valuable as professional development.
    There is no way to create a policy that addresses all possible scenarios.  However, a faculty that is well-grounded in Assessment FOR Learning philosophy can create its own logistical answers to the situations that arise.

Our friends at @CVULearns in Vermont have put together a wonderful argument for why focusing on Standards Based LEARNING is significantly more important than focusing on Standards Based GRADING.  All I can say is "Amen!"

Enjoy their thoughts here:

http://cvulearnsblog.blogspot.com/2015/09/newsflash-sbg-does-not-improve-student.html?m=1

Read more…

1/11/17 Applying SBL Philosophy

[Image: chart of three students' scores on two content standards]

Three students in your class took a test that assessed 2 of your class's content standards.  Their scores are shown in the chart above.  Assume that a score of 20 for a specific standard is a score that demonstrates a high level of mastery of that specific standard.

Discuss how an SBL philosophy would impact the way you handled these results.

Leave a summary of your thoughts or your group's thoughts as a reply in the box below.

Read more…

This Assessment Network is dedicated to the concepts of AFL: Assessment FOR Learning.  In other words, the PURPOSE of assessment is for learning to occur.  It's impossible to maximize your AFL efforts if you don't assess based on content standards.  That's where SBL: Standards Based Learning comes into play.  

There's philosophy, and then there's Philosophy in Action.  When it comes to Assessment Philosophy in Action, it doesn't get any better than LOOPING.  This blog post will include all other posts from this network that are dedicated to the practice of LOOPING in the classroom.  

Read more…

A recent post on The Assessment Network titled Redos and Retakes? Sure. But don't forget to Loop! received a lot of attention via social media and led to quite a few productive discussions.  Without repeating all that was already shared in that post, the basic premise was this:

If we care about learning more than grading and if we want to communicate that to students, then we will need to understand that:

The power of assessment is greatly enhanced when Standards Based teaching and assessment practices - such as Looping - are interwoven into the daily instructional process.

This concept of Looping was juxtaposed with the common practice of allowing students to ask for Redos and Retakes. While Redos and Retakes were not directly discouraged, educators were encouraged to focus first on building reassessment into the very fabric of the learning process instead of waiting to reassess after students decide they don't like their grades.

The post and the concept of Looping generated quite a bit of feedback via social media.  A common response went something like this:

I really like the idea of Looping.  Could you share practical examples of what this might look like in a classroom?  

If you haven't read the original post yet, I would suggest doing so before moving on.  Once - or if - you have, then read below for very practical and applicable examples of Looping shared in her own words by Robin Tamagni, an Earth Science teacher at Salem High School in Salem, VA.


How do I loop in my class?

The first thing that I do is teach my Earth Science content to the best of my ability.  I try to explain and break down everything and have no assumptions that my students just ‘know what I’m talking about’.  Once I teach something, I make sure the very next day I go back and have my students practice it with one another, especially the vocabulary.  In Earth Science there is an abundance of new vocabulary that students have never heard of, so going back and practicing it every day with their partners is crucial for maintaining, establishing, and growing knowledge throughout the year.  I use partner quizzing of vocabulary words, flash cards, Quia.com and Kubbu for review games, and acronyms to help students remember the words.  This constant review and practice is Looping in its simplest form. 

Once we have taught and practiced the content, I assess my students.  Specifically, I like to use PowerSchool Assessment (formerly Interactive Achievement) so that instead of just finding an overall grade I can receive and give feedback in terms of mastery of specific content standards.  The data from the assessments shows me areas of strength and weakness for each individual student.  This is an example of what that data looks like for a student. 

[Image: example of a student's standards-based score report from PowerSchool Assessment]

Instead of just seeing a grade of 63%, PowerSchool Assessment provides me with more specific and standards-based feedback.  I learn that a student does better with the topic of Igneous Rocks, but struggles with Sedimentary and Metamorphic Rocks.  Therefore, I am able to focus on their problem areas so they can grow rather than waste their time and mine reteaching them everything about rocks.

As important as this standards-based data is for my decision making, it is even more important to get the data in the hands of my students so they can be trained to let it guide their decision making. Training them to understand and interpret data is something I begin doing early in the school year and then am very consistent with all year long.  To help make the students' data meaningful, I give them what I call the "Weak Areas Sheet" (see example below or click link to download a Word file).

[Image: example of a student's Weak Areas Sheet]

On each student's Weak Areas Sheet I fill in the mastery feedback from PowerSchool Assessment into the blank for each assessed standard.  Now the students know exactly which specific topics they need to work on. 

On my classroom website, our school's other Earth Science Teacher, Wes Lester, and I have compiled a huge list of resources for practicing each specific standard.  (Visit Mrs. Tamagni's Study Center)  These practice activities include Quia "Who Wants to Be a Millionaire" games, YouTube video clips to reteach a topic, Kubbu games to practice sorting vocabulary, Purpose Games to practice labeling features of the earth, practice quizzes, etc.  Each standard has a list of these types of activities that are specifically labeled for easy access. 

[Image: standards-based practice activities listed on the class website]

My typical lesson planning involves giving students opportunities each week to go to my website and work on their weakest areas.  Generally this looks like students taking about 15 minutes in class to log in, pick 3 games in each of their weakest areas, and practice.  I ask them to complete the review game and then show me their results when they have done so.  I may use this as a Do Now/Bell Ringer activity or as an Exit Ticket activity.  If I find that I have 10 unexpected extra minutes near the end of class, having my students get out their Weak Areas Sheet and do some Looping is a practical and meaningful way to "fill that time".

Looping in this manner also works great for students who are accelerated.  First of all, this method of assessment lets me know who has actually mastered the standards rather than just who happens to have a high grade.  It is rare to find a student who has truly mastered ALL taught standards.  However, when I do find someone who has reached this level I can let the student go ahead and start practicing standards that will be taught in the future, or I can give that student an opportunity to serve others by coaching peers who are weak in standards they're strong in. 

As the school year progresses, my looping practices expand somewhat.  By mid-year I have worked hard to create an abundance of practice stations in my room.  Each station correlates to a specific content standard. By mid-year, my students definitely know where their weak area(s) is (are).  I strategically pair students up with one another (one weak, one strong) and have them travel around my classroom to all the different stations beginning at their weakest standards.  The stronger student is coached on how to act as a peer teacher and to make sure they take the opportunity to explain and help their partner through their weaker standards. While serving as a peer coach does not come naturally to all students, if I focus on developing great relationships with my students they become more willing to work at it as a way to help me.

Opportunities for Growth

A final key component of my looping strategy involves "never letting go of the past".  Each time students take a test in my class they will always have a retest on old standards at the same time.  For example, students will take their first test on our Rocks and Minerals standards in September.  Then in October, they will take a test again on Rocks and Minerals and a separate test on Plate Boundaries.  Then in December students will take another test on Rocks and Minerals and Plate Boundaries, but this time we'll add in Earth’s History. 

This method of looping means that each time a student takes a test they have an opportunity to demonstrate growth - as opposed to just demonstrating how well they have learned (or memorized) the current content.  Let’s say a student scores a 60% on the first Rock and Mineral Test in September.  A 60% does NOT reflect what they will know about Rocks and Minerals by April.  Students are encouraged to continuously get better and grow in each standard instead of just moving on and forgetting about it. 

In October when we test again on Rocks and Minerals along with Plate Boundaries students will have worked on their weaknesses in the category of Rocks and Minerals and will hopefully show some sort of growth within that standard.  If a student has demonstrated growth I will replace the old score with their new score since that new score is now a better reflection of what they actually know.  If a student has scored the same or lower I will add that score to the grade book and use it as communication for where they need additional growth and practice.  
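A minimal sketch of that record-keeping rule, with an invented gradebook keyed by standard (the standards and scores below are made up for illustration): a retest score replaces the recorded score only when it shows growth.

```python
# Hypothetical gradebook: the score currently recorded for each standard.
gradebook = {"Rocks and Minerals": 60, "Plate Boundaries": 82}

def record_retest(gradebook, standard, new_score):
    """Apply the looping rule: growth replaces the old score."""
    old_score = gradebook.get(standard)
    if old_score is None or new_score > old_score:
        # Growth demonstrated: the new score is now the better reflection
        # of what the student actually knows, so it replaces the old one.
        gradebook[standard] = new_score
    else:
        # No growth: the old score stands, and the new attempt is simply
        # feedback about where additional practice is still needed.
        print(f"{standard}: still needs work (scored {new_score})")
    return gradebook

record_retest(gradebook, "Rocks and Minerals", 78)  # 60 -> 78
print(gradebook)  # {'Rocks and Minerals': 78, 'Plate Boundaries': 82}
```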

The Looping strategies I have described are essential to getting kids to learn and master content.  There is definitely a lot of infrastructure that must be created before it can be done well.  However, the payoff is worth the effort.

Thoughts or questions?  Feel free to leave comments below.  You can also reach Robin at her profile page on this network or email her at rtamagni@salem.k12.va.us.  Similarly, Scott can be reached at his profile page on this network or reached via email at scotthabeeb@gmail.com.

Read more…

It's Time to Take an Assessment Journey!

This network has tons of practical examples of Assessment FOR Learning, great insights into Standards Based Learning concepts, and even a bunch of Sports Analogies to help educators apply sound assessment philosophy to their classrooms.  But how can school leaders and teachers help lead assessment change in their schools and systems?

Pawel Nazarewicz (Salem High Math Teacher) and I (Scott Habeeb, Salem High Principal) wrote this article for the Fall 2016 issue of Virginia Educational Leadership to help administrators and teachers determine their assessment needs and then lead assessment journeys in their schools.

It's Time to Take an Assessment Journey

http://publications.catstonepress.com/i/751683-fall-2016/59

We'd love your feedback!

Read more…

As the Standards Based movement has grown, allowing students to Redo assignments and Retake tests has become a rather common practice.  Blogs, articles, books, and workshops have focused on the importance of Redos and Retakes (R/R) and how to practically implement R/R at the classroom and school level.  Divisions, schools, and teachers have created policies that detail, rather specifically, the conditions through which students might R/R assignments.

The progression from Standards Based philosophies to the practice of R/R goes something like this:

  1. Students learning content and skills is the mission, therefore, we can't be satisfied with students not learning.
  2. Since all students do not learn at the same pace, when we become aware that students have not mastered specific content standards, we should give students additional opportunities to learn those standards.
  3. Low scores/grades/marks/feedback commonly indicate that a student hasn't mastered content or skills.
  4. When students have low scores/grades/marks/feedback, we should provide them R/R opportunities so they can improve the scores/grades/marks/feedback.
  5. Improved scores/grades/marks/feedback indicate that students have learned the content and/or skills.

Based on what I have seen working with outstanding teachers in my own school (Salem High in Salem, VA) and from what I have learned as I have traveled around the country helping schools with their assessment needs, I would like to make the following recommendation:

Let's remember that R/R is not the ONLY way - and often not the best way - to implement Standards Based philosophies.

Let me clarify: I will not be suggesting in the paragraphs to come that R/R practices should stop, but that:

  • We need to make sure R/R fall in their proper and appropriate context, and that
  • Looping is a teaching and assessment practice that deserves strong consideration because it keeps the focus on learning better than most R/R practices do.

The phrase Standards Based Grading (SBG) is used quite commonly to refer to the use of Assessment FOR Learning practices based on standards.  However,  the phrase Standards Based LEARNING (SBL) is more instructionally-relevant to use since this keeps us focused on the goal and the mission of learning rather than on the significantly less important focus of grading.  

Regardless of your choice of terms - SBG or SBL - the most important aspect of the Standards Based movement is not any one specific practice but instead how educators think about assessment.  Teachers trying to grow in their use of assessment must focus first on the way they THINK about assessment rather than on HOW they will assess or WHAT assessments they will use.  The Standards Based movement is not really about grading; it's about learning.  But the associated increase in learning is dependent on a change in thinking.

  • If a teacher thinks about learning primarily in terms of students demonstrating mastery of individual specific standards (as opposed to students increasing their overall aggregate "average" grade) then a teacher will communicate with students and parents in terms of individual specific standards mastery.  
  • If a teacher communicates in terms of individual specific standards mastery, then students and parents are more likely to think about progress in terms of individual specific standards mastery, rather than increasing their overall aggregate average.
  • If students and parents think about progress in terms of individual standards mastery, then they are more likely to communicate in those terms, as well.

The problem with typical R/R practices is that they have a tendency to cause all of us - educators, students, and parents - to think and communicate in terms of grades rather than learning.


It's natural for students and parents to be hyper-focused on grades, and it would be unrealistic to expect them to unilaterally take steps to shift that focus to learning.  The perceived benefits and consequences of grades are too immediate and too ingrained in our culture.  If learning is ever to take its rightful place in relation to grading, it will have to be the educators in the schools who set that tone.  Anything educators do that encourages or reinforces the focus to be on grades will run counter to what we want most - to have a culture that values learning about all else.

While the typical reason educators embrace R/R is a desire for students to learn, too often the reality is that R/R reinforces the students' focus on grades above all else.  If I'm a student and I find out I have a low score/mark/feedback, my natural inclination is to consider how that impacts my grade.  When a teacher or a school or a division creates a policy that gives me the "right" to retake an assignment, what I tend to hear is that I have the "right" to increase my grade.

The underlying problem with many R/R policies is that they are examples of what could be called "After-the-Fact" Standards Based assessment.  In other words, now that we've finished this unit/topic and you have scored at a level that you (or your parent) don't approve of, you can go back and fix your grade by R/R after-the-fact.  

If you're exploring incorporating Standards Based assessment practices into your classroom, starting with figuring out an R/R policy/procedure would be a mistake.  Begin by growing in your understanding of SBL philosophy so you will be able to THINK in a Standards Based manner and be prepared to apply SBL logistics to the myriad of situations that inevitably will arise.  

The power of assessment is greatly enhanced when, rather than after-the-fact, Standards Based teaching and assessment practices - such as Looping - are interwoven into the fabric of the learning process.


Here's what happens when a teacher gains a great understanding of SBL philosophy:

  • A teacher who THINKS in terms of standards mastery will base instruction and communication on standards.  
  • Then, because the teacher THINKS this way, communicates this way, and wants to ensure that students master standards, the teacher will routinely - probably daily - assess students to gauge the level of student learning.  
  • This will cause individual standards to be assessed multiple times and, more than likely, through multiple measures.
  • Because measuring progress towards individual standards mastery is important, the teacher will want to record these measurements in a manner that allows him/her to see how each student is progressing toward each standard - rather than simply averaging all work completed.

There will come a time when the teacher will move on to new content, however, the students' progress toward past standards will remain in front of that teacher and the students as a constant reminder that some students - maybe many students - have still not mastered standards at a satisfactory level.  This leaves the teacher with 1 of 3 options:

  1. Don't worry about the standards not satisfactorily mastered.  
    This should be obviously unacceptable but needs to be included since it is a theoretical possibility.
  2. Wait until the end of the year and then go back and review past standards.
    This often helps students "cram" for an end-of-course test but does little to move learning into long-term memory.
  3. Throughout the year, continuously review previously taught concepts, content, and skills.

For the remainder of this post we will refer to this Option 3 as Looping.

The Looping concept - continuously reviewing previously taught concepts, content, and skills - is a teaching and assessment strategy with greater potential to increase student learning than R/R practices alone.  Here's why:

Looping focuses on learning while R/R tend to focus on grades.

Furthermore, Looping is teacher-driven, while R/R is often student (or even policy) driven.

As previously stated, R/R tend to happen after-the-fact once students (or parents) are unsatisfied with grades.  R/R policies in schools tend to focus on students having a right to something.  This often leads to unnecessary tension as students "exercise their rights."  However, even when tension does not occur, when R/R is the main way standards based thinking is implemented, students tend to view it primarily as a way to improve grades.

Looping, on the other hand, is all about learning.  Looping is not dependent on students (or parents) wanting, after-the-fact, to improve a grade.  Instead, Looping is teacher-driven and built into the teacher's normal planning.  It's organic, rather than after-the-fact.  It's based on the idea that repetition is essential to learning, so teachers who want students to learn will naturally keep looping back to topics that need reinforcement.  Looping doesn't require a teacher to constantly grade and re-grade assignments, a logistic that can often turn R/R into a burden.    

With Looping, the teacher controls:

  • THE WHAT: 
    Looping occurs on topics that the teacher knows - based on assessment data - need to be re-addressed and re-assessed, 
  • THE WHEN: 
    Looping occurs as frequently as the teacher's assessment data shows looping is needed,
  • THE WHO:
    Looping makes sure that all students in a class - not just those who come and ask for R/R - are continuously enhancing their skills, and
  • THE HOW:
    Looping can happen through repeat lessons, additional practice, old questions being included on new tests, whole class activities, differentiated assignments, daily quizzes, etc.

Looping doesn't require a policy.  Looping just requires a teacher who:

  • assesses regularly,
  • knows how students are progressing toward standards mastery, and
  • understands that humans learn through repetition and practice.

So does this mean that teachers should stop allowing students to R/R assignments?  Absolutely not.  Teachers should use their professional judgement to determine when R/R are most appropriate.  But R/R must be applied in a manner that supports the philosophy of SBL, rather than as a one-size-fits-all approach.

What I'm recommending is this.  As educators who value learning above grading, let's:

  • First think in a Standards Based manner - let's think in terms of how to teach standards, assess based on standards, and organically and regularly Loop back to standards so students get maximum practice and repetition.
  • Put into daily practice the seemingly obvious fact that the more times a student encounters content, practices, and is assessed, the more likely the student is to actually learn and remember.
  • Make sure we don't allow the quest for grades to trump learning.
  • Not create policies that tie teachers' hands - such as "thou shall give a retake whenever students request one" - but instead, let's encourage teachers to use their expertise to help students learn.
  • Remember the purpose of Assessment FOR Learning - we assess so students will learn rather than assess to create grades.

Got any thoughts?

Read more…

Why do you assess your students? A teacher's answer to this question reveals much about what that teacher values.  

For example, if a teacher's answers to the question center around determining a student's grade for a report card or transcript or around figuring out how much a student learned at the end of instruction, then it's obvious the teacher places a great emphasis on grading.

On the other hand, if a teacher's answers center around providing the teacher and the student with feedback so that more appropriate instructional and learning decisions can be made, then it's obvious the teacher places a great emphasis on learning.

This post is written to provide those teachers who care more about learning than they do about grading with an analogy that will help them productively focus their assessment efforts.

At a recent Salem High School faculty meeting, SHS Welding Teacher, Joshua Graham, shared with his colleagues the assessment tools and practices that he and his fellow Trades and Industrial teachers use to help them help students learn.  He spoke about several software programs they use to assess student progress and to provide students with descriptive feedback to help them focus their study habits.  He talked about using assessment data to evaluate his teaching and to enable him to make more student-centered decisions.

The content of Josh's presentation was insightful and the strategies shared exemplary.  Near the end of it, though, he shared with us a rather simple analogy that has profoundly impacted the way I now view an educator's assessment role.

Josh shared with us that, along with other women in their church, his wife was reading a book entitled Leading and Loving It.  The book included an analogy that Josh took and applied to assessment.  It was the analogy of The Thermometer v. The Thermostat.

The Thermometer

Think about what a thermometer does.  A thermometer gives you a temperature at a certain point in time.  Let's pretend you have a thermometer in your home.  By checking its reading, you will know the air temperature in your home.

What does the thermometer do FOR the temperature in your home?  Nothing.  While a thermometer is a useful tool, it simply provides us with information.  It does nothing to alter or change that information.

The Thermostat

Now consider the thermostat.  Like the thermometer, the thermostat also checks the temperature at a certain point.  In fact, by checking the thermostat in your home you can find out the air temperature in your home just like you could with a thermometer.

But the thermostat also does something FOR the temperature in your home.  The thermostat takes the temperature, compares that to the DESIRED temperature outcome, and then makes adjustments to increase or decrease the temperature accordingly.

While a thermometer is a useful tool, a thermostat is a much more powerful tool and a much more impactful tool.  With only a thermometer you would be able to verify the fact that your house was too hot, too cold, or just right.  But with a thermostat you can actually control the temperature outcome.

Josh explained this analogy and then applied it to assessment by encouraging his colleagues to be thermostats - not thermometers.  Being a thermometer is fine if our goal for assessment is to determine a grade.  We can teach a unit of content, assess our students to see how well they learned it, record that "temperature", and move on.

But if our goal for assessment is to increase learning, then we have to be thermostats.  The thermostat teacher is constantly assessing so he or she knows where his or her students - collectively and individually - are in the learning process.  Then the thermostat teacher makes the necessary adjustments in teaching so that the "temperature" changes appropriately.  The thermostat teacher trains students to be thermostats as well, always self-assessing and analyzing feedback to determine what adjustments need to be made at their end.

Simply put, the thermometer teacher can document IF students learned.  The thermostat teacher increases learning.

So why do you assess?  If it is to increase learning, then consider how the analogy of The Thermostat might be applied to your classroom.

Thanks, Josh! 

Read more…

A Letter to the Editor from Rick Wormeli

Recently, several letter writers to the Forest City Summit, an Iowa newspaper, have disparaged standards-based grading.  Specifically, they disparaged Rick Wormeli's work in that field.  As a result, Mr. Wormeli wrote a response to those letter-writers, and the newspaper agreed to run it.

While I am personally unfamiliar with the events in Forest City Schools, IA that led to these letters being written, public arguments like this over grading issues always cause me to wonder if the school division employed too much of a top-down method of improving assessment strategies.  

At its heart, standards based learning really shouldn't be controversial.  Learning should be measured against standards and communicated in terms of standards so that grades actually represent learning and, more importantly, so teachers and students know where to focus their instructional and learning efforts.

When individual teachers implement solid and well-communicated SBL strategies, students tend to appreciate the descriptive and helpful nature of the feedback.  Students tend to appreciate knowing where their strengths and weaknesses are so that they can then focus on improving where necessary.  And typically, when students appreciate what is going on in class and feel like it helps them learn, parents are supportive.

However, when policies are implemented at the division level and then required or mandated, it is not uncommon to create controversy where none need exist.  I would encourage schools and divisions to focus on a meaningful professional development journey - to take the long view approach - instead of looking to change practices by changing policy.

Again, I do not know what exactly went on in this Iowa school district, but I do know that educators exploring the merits of standards based learning would benefit from reading Mr. Wormeli's letter.  

Here's a link to the letter in its original form on the Forest City Summit's website: 

http://globegazette.com/forestcitysummit/opinion/letter-to-the-editor/article_937be5bc-b62a-5874-aec1-d4053dfff9f3.html

Below is the same letter copied and pasted into this blog:  

To the editor:

In recent letters to the editor in the Summit, my work was mentioned as one catalyst for the shift in grading practices in Forest City Schools from traditional to standards-based grading. Many of the claims made by the authors misrepresent me and these practices, however, and I’d like to set the record straight.

Most of us think the purpose of grading is to report what students are learning, as well as how students are progressing in their disciplines.  It is important for grades to be accurate, we say, otherwise we can’t use grades to make instructional decisions, provide accurate feedback, or document student progress.

These are wise assertions for grading. Nowhere in these descriptions, however, is grading’s purpose stated as teaching students to meet deadlines, persevere in the midst of adversity, work collaboratively with others, care for those less fortunate than ourselves, or to maintain organized notebooks. While these are important character attributes, we realize that none of the books or research reflecting modern teaching/parenting mentions grading as the way in which we instill these important values in our children.  

We actually know how to cultivate those values in others, but it isn’t through punitive measures and antiquated notions of grading. Myron Dueck, author of Grading Smarter, Not Harder (2014), writes,

“Unfortunately, many educators have fallen into the trap of believing that punitive grading should be the chief consequence for poor decisions and negative behaviors. These teachers continue to argue that grading as punishment works, despite over 100 years of overwhelming research that suggests it does not (Guskey, 2011; Reeves, 2010).”

In 2012, researcher John Hattie published Visible Learning for Teachers: Maximizing Impact on Learning, with research based on more than 900 meta-analyses, representing over 50,000 research articles, 150,000 effect sizes, and 240 million students.  He writes,

“There are certainly many things that inspired teachers do not do; they do not use grading as punishment; they do not conflate behavioral and academic performance; they do not elevate quiet compliance over academic work; they do not excessively use worksheets; they do not have low expectations and keep defending low quality learning as ‘doing your best’; they do not evaluate their impact by compliance, covering the curriculum, or conceiving explanations as to why they have little or no impact on their students; and they do not prefer perfection in homework over risk-taking that involves mistakes.” 

Those interested in research on standards-based grading and its elements are invited to read books written by Robert Marzano, Tom Guskey, Carol Dweck, Doug Reeves, John Hattie, Susan Brookhart, Grant Wiggins, Tom Schimmer, and Ken O’Connor. Matt Townsley, Director of Instruction in Solon Community School District in Iowa has an excellent resource collection at https://sites.google.com/a/solon.k12.ia.us/standards-based-grading/sbg-literature.

A caution about worshiping at the research altar, however: Not all that is effective in raising our children has a research base. A constant chorus of, “Show me the research,” adds distraction that keeps us from looking seriously and honestly at our practices.  When we get our son up on his bicycle the first time, and he wobbles for a stretch of sidewalk then crashes abruptly into the rhododendrons, we give him feedback on how to steer his bicycle, then ask him to try again. Where’s the vetted research for doing that? It’s not there, and we don’t stop good parenting because we don’t have journaled research.

Trying something, getting feedback on it, then trying it again is one of the most effective ways to become competent at anything. How does an accountant learn to balance the books? Not by doing it once in a trumped-up scenario in a classroom. Can a pilot re-do his landings? Hundreds of times, in simulators and planes, before he actually pilots a commercial airliner with real passengers.  How do we learn to farm? By watching the modeling of elders and doing its varied tasks over and over ourselves. How do we learn to teach? By teaching a lot, not by doing it once or twice, then assuming we know all there is. I want a doctor who has successfully completed dozens of surgeries like the one she’s about to do on me, not one who did one attempt during training.

This is how all of us become competent. Some individuals push back against re-doing assignments and tests, however, because there’s a limited research base for it, or so they claim (there’s actually a lot of research on the power of reiterations in learning). My response to the push back is: When did incompetence become acceptable? How did we all learn our professions? Does demanding adult-level, post-certification performance in the first attempt at something during the young, pre-certification learning experience help students mature?

Parents should be deeply concerned when teachers abdicate their adult roles and let students’ immaturity dictate their learning. A child makes a first attempt to write a sentence but doesn’t do it well, and the teacher records an F for “Sentence Construction” in the gradebook with no follow-up instruction and direction to try it again? Really? We can’t afford uninformed, ineffective teaching like this. To deny re-learning and assessment for the major standards we teach is educational malpractice. Parents should thank their lucky stars for teachers who live up to the promise to teach our children, whatever it takes.

We can’t be paralyzed by the notion put forth by Dr. Laura Friesenborg in her Nov. 25 letter that juried journals of research are the only source of credibility. Dr. Friesenborg says that there has been, “…no robust statistical analysis of students’ national standardized test scores, pre- and post-implementation” of the practices for which I advocate. This is disingenuous because it’s physically and statistically impossible to conduct such a study, as there are so many confounding variables as to make the “Limitations of the Study” portion of the report the length of a Tom Clancy novel. We do not have the wherewithal to isolate students’ specific outcomes as a direct function of teachers’ varied and complex implementations of so many associated elements as we find in SBG practices, including the effects of varied home lives and prior knowledge. If she’s so proof driven, where is her counter proof that traditional grading practices have a robust statistical analysis of pre- and post-implementation? It doesn’t exist.

She dismisses my work and that of the large majority of assessment and grading experts as anecdotal and a fad education program, declaring that I somehow think students will magically become intrinsically motivated. This is the comment of someone who hasn’t done her due diligence regarding the topic, dismissing something because she hasn’t explored it deeply yet. Be clear: there’s no magic here; it’s hard work, much harder than the simplistic notion that letter grades motivate children.

Friesenborg diminishes the outstanding work of Daniel Pink, whose book Drive is commonly accepted as well researched by those in leadership and education, and she does not mention the work of Vygotsky, Dweck, Bandura, Lavoie, Jensen, Marzano, Hattie, Reeves, Deci, Ripley, de Charms, Stipek and Seal, Southwick and Charney, and Lawson and Guare, whose collective works speak compellingly to the motivational, resilience-building elements found in standards-based grading. Is it because she is unaware of them, or is it because their studies would run counter to her claims? Here she is distorting the truth, not helping the community.

We DO have research on re-learning/assessing (see the names mentioned above), but it’s very difficult to account for all the variables in the messy enterprise of learning and claim a clear causation. Some strategies work well because there’s support at home, access to technology in the home, or a close relationship with an adult mentor, and some don’t work because the child has none of those things. Sometimes we can infer a correlation in education research, but most of the time, good education research gives us helpful, new questions to ask, not absolute declarations of truth. When research does provide clear direction, we are careful still to vet implications thoughtfully, not dismiss what is inconvenient or doesn’t fit our preconceived or politically motivated notions.

When we are anxious about our community’s future, we want clear data points and solid facts, but teaching and learning are imperfect, messy systems, and we’re still evolving our knowledge base. Many practices have stood the test of time, of course, but it’s only a minority of them that have a strong research base. We can’t cripple modern efforts by waiting for one, decisive research report to say, “Yay or Nay.” At some point, we use the anecdotal evidence of the moment, asking teachers to be careful, reflective practitioners, and to welcome continued critique of practices in light of new perspective or evidence as it becomes available. If we’re setting policy, we dive deeply into what is available in current thinking and research nationwide so our local decisions are informed.

In her letter, Friesenborg describes standards-based grading as, “radical.” Please know that it is quite pervasive, with thousands of schools across the country actively investigating how to implement it or having already done so. Most states, in fact, are calling for competency-based learning and reporting to be implemented. Friesenborg states that the Iowa State Board of Education makes standards-based learning a legislative Advocacy Priority. This is a positive thing, and SBG practices promote exactly this. We want accurate reporting. That means we separate non-curriculum reports from the curriculum reports. It helps all of us do our jobs, and it provides more accurate tools for students to self-monitor how they are doing relative to academic goals.

Such grading practices are not even close to the definition of radical. Read the observations of schooling in Greece, Rome, Egypt, Babylonia, and on through the Renaissance, the 1700’s, the 1800’s, and the 1900’s: grades reporting what students had learned in their subjects were the predominant practice. There were separate reports of children’s civility and work habits. That’s what we’re doing here with SBG, nothing else. It’s dramatically more helpful than a grade that indicates a mishmash of, “Knowledge of Ecosystems, plus all the days he brought his supplies in a timely manner, used a quiet, indoor voice, had his parents sign his reading log for the week, and brought in canned food for the canned food drive.”  In no state in our country does it say, “Has a nice neat notebook” in the math curriculum. That’s because it’s not a math principle. It has no business obscuring the truth of our child’s math proficiency.

We have plenty of research, let alone anecdotal evidence, that reporting work habits in separate columns on the report card actually raises the importance of those habits in students’ minds, helping them mature more quickly in each area. The more curriculum we aggregate into one symbol, however, the less accurate and useful it is as a report for any one of the aggregated elements or as a tool of student maturation. SBG takes us closer to the fundamental elements of good teaching and learning.

Rick Wormeli

Read more…

One of the great hurdles to moving toward a Standards Based approach to learning, teaching, grading, and communicating is the fact that our students have been conditioned to operate in a points-based system.  They have been raised in a system that focuses more on earning points for grades than on standards-based feedback about learning.

Educators and schools making the shift to SBL philosophies often develop strategies and plans for communicating SBL principles and practices to parents.  The thinking is that parents have been conditioned by the same system that has trained their children and that parents will be upset if their children are faced with new constructs, lingo, and grading practices.  While communicating clearly with parents is important, focusing first on how to win over parents overlooks the most powerful communication ally teachers possess - students.

If students understand the goals of SBL and how it will benefit their learning, then they become powerful advocates for meaningful assessment strategies.  Students are the buffer between school and home.  We should never underestimate the importance of making sure that they understand the value of what we're doing with them in the classroom.  If they can articulate a concept appropriately, then their parents are more likely to hear about practices such as SBL in a positive manner - even if we have never directly communicated with them about those practices.

Beth Denton, a wonderful Math teacher at Salem High School, recently shared the following email with me.  The first paragraph is Beth's explanation to me.  The second paragraph is from the student to Beth.  Notice that Beth recognizes that the student is still too focused on the grade.  However, the outcome, even if influenced by a desire for a higher grade, is one that leads to a student taking ownership of learning as a result of Beth's standards-based feedback.

Here is the email I received from Beth:

An email from a concerned student. While it's still grade focused, I see hints that we're moving in the right direction.  This student knows what she needs to improve on and is looking for MASTERY of the topics!  Yay!

 
Hi Mrs. Denton! I haven't gotten a chance to have a conversation with you about this so I thought I would send you an e-mail and come in some time next week after school to start working. My current grade in this class is an 87 and my goal for this semester is to have an A. I realize that since the points are different, my best bet is to make up some of the previous "1.0's" that I got and of course, continue to ace tests and any graded assignments. According to JumpRope, the skills that I personally need improvement on include: Determining whether figures have been rotated, dilated, or reflected, Parallel Lines cut by a transversal, and finally, The unit 5 congruent triangles portion of the test. I would like to improve my mastery on these skills not only to get my grade up, but to do well on these topics during the SOL. If you are available I plan to be staying after school as much as possible next week. So sorry for such a long e-mail haha! Thank you for understanding.

Are you training your students to think in terms of SBL?  Are your students still coming to you chasing points, or are they, as Beth's student exemplifies, able to communicate their specific areas of strength and need based on your standards-based feedback?  (By the way, JumpRope, the standards-based grading system referenced by the student, is a phenomenal tool for standards-based communication.)

If this student's parent were to ask her to explain how Mrs. Denton grades, I have no doubt that the student would be able to do so in a way that would cause the parent to appreciate Mrs. Denton's instructional practices.  More importantly, this student would be able to communicate that Mrs. Denton is grading and assessing in a manner that enables her to learn.

Read more…

As has been stated on this site before, traditional grading practices have led to a culture of "Quest for Numerator Points" in our schools and with our students.  Students have been trained and conditioned to care more about grading than learning.

We educators wish this wasn't the case, yet to complain about it makes as much sense as Sea World trainers complaining that Shamu only does tricks when rewarded with fish (more on that concept here).  In other words, while we don't like the fact that students routinely ask teachers for opportunities to earn more points, the reality is that they are doing what they've been conditioned by us to do.  

When we measure student progress in terms of points earned over points possible, it just makes sense that students want more points.  The solution to this situation is to have a way to communicate how students are doing other than an average of all work completed.  The solution is to communicate student progress toward the mastery of specific standards.  The solution is Standards Based Learning.

Consider this.  A student asks how she is doing in your class.  You respond, "You're doing pretty well.  You currently have a 92."  What is the student supposed to do if she wants to improve?  The only logical response is for her to consider how many points she needs to get on the next assignment to raise her average or to think of ways to earn additional or extra points.  Either way, the focus is on earning points instead of on increasing learning.

However, if a student asks how she is doing in your class and you are able to respond by telling her how she is progressing toward specific standards, then you have the potential for students to ask more productive questions that focus on learning.  Want to see what I mean?  Here is a recent question a student at our school emailed to her teacher:    

I am having trouble with the Multiplication and Division of the radicals, is there anyway I can receive more help on trying to understand them? It's my only struggle.

Wow!  Isn't that exactly the sort of question we want students to ask?  This student understands what she knows and doesn't know because of the specific standards-based feedback she has received from her teacher.  This student has been trained to seek to master standards instead of just collect points.  The student is seeking out her teacher in a positive, productive, and meaningful manner.
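To make that contrast concrete, here is a minimal sketch in Python. The numbers, standard names, and 0-4 mastery scale are hypothetical (they are not pulled from any real gradebook or from JumpRope); the point is simply that a single points average hides exactly the information a standards-based report makes obvious.

    # Hypothetical example (not real student data, and not the JumpRope API):
    # the same student's work, reported two ways.

    # 1) Points-based view: one number, no hint about WHAT to improve.
    points_earned = [46, 18, 28]          # e.g., a test, a quiz, a project
    points_possible = [50, 20, 30]
    average = 100 * sum(points_earned) / sum(points_possible)
    print(f"Overall average: {average:.0f}")   # prints "Overall average: 92"

    # 2) Standards-based view: an assumed 0-4 mastery level per standard,
    #    so the next step is obvious.
    mastery = {
        "Transformations (rotation, dilation, reflection)": 3.5,
        "Parallel lines cut by a transversal": 2.0,
        "Congruent triangles": 2.5,
    }
    for standard, level in sorted(mastery.items(), key=lambda kv: kv[1]):
        print(f"{level:.1f}  {standard}")
    # The lowest-scoring standards tell the student exactly what to relearn;
    # the 92 by itself only tells her to go hunt for more points.

Both views describe the same work; only the standards-based one gives the student something specific to ask her teacher about.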

Let's face it: Students will always care about grades.  We did when we were students, and it's completely understandable.  But if we want them to value learning, we - the educators - need to provide them with a new paradigm.  This is the beauty of Standards Based Learning.

Read more…

Recently, our friends at JumpRope asked Pawel Nazarewicz and me to share our personal experiences with Standards Based Learning here at Salem High School.  We put some thoughts together, which they turned into the blog post linked below.

We hope members of this network will find it helpful, and we'd love to hear from you about your experiences, struggles, and successes with SBL.

Here's the link to the post: https://www.jumpro.pe/blog/standards-based-teaching-and-learning-after-year-one/

Read more…

56 Examples of Formative Assessments

David Wees, the Formative Assessment Specialist for New Visions Public Schools, has created a Google Slides presentation with 56 practical examples of formative assessments to use in the classroom.  For anyone looking for ways to expand their AFL toolbox, this is a no-brainer.

The presentation can be found at this site: https://docs.google.com/presentation/pub?id=1nzhdnyMQmio5lNT75ITB45rHyLISHEEHZlHTWJRqLmQ&start=false&loop=false&delayms=3000

Read more…
