All Posts (134)

Seven Practices for Effective Learning

Check out Seven Practices for Effective Learning from the November 2005 edition of ASCD's Educational Leadership.  This is a great description of how to use assessment to promote learning.


Followers of this site will find the 7 practices outlined in the article to be quite familiar.  They are:

  1. Use summative assessments to frame meaningful performance goals.
  2. Show criteria and models in advance.
  3. Assess before teaching.
  4. Offer appropriate choices.
  5. Provide feedback early and often.
  6. Encourage self-assessment and goal setting.
  7. Allow new evidence of achievement to replace old evidence.


Read more…

Members of this site will appreciate the way middle school principal, Ryan McLane, has described the importance of Mastery Grading.  Read his Education Week article at: http://mobile.edweek.org/c.jsp?DISPATCHED=true&cid=25983841&item=http%3A%2F%2Fwww.edweek.org%2Few%2Farticles%2F2013%2F06%2F05%2F33mclane.h32.html%3Ftkn%3DZLCCITACcBCfR8CBWbOW%252BWRaeYRQ%252BrwJbqnf%26cmp%3Dclp-sb-ascd

Read more…

Why do you assess your students? A teacher's answer to this question reveals much about what that teacher values.  

For example, if a teacher's answers to the question center around determining a student's grade for a report card or transcript or around figuring out how much a student learned at the end of instruction, then it's obvious the teacher places a great emphasis on grading.

On the other hand, if a teacher's answers center around providing the teacher and the student with feedback so that more appropriate instructional and learning decisions can be made, then it's obvious the teacher places a great emphasis on learning.

This post is written to provide those teachers who care more about learning than they do about grading with an analogy that will help them productively focus their assessment efforts.

At a recent Salem High School faculty meeting, SHS Welding Teacher, Joshua Graham, shared with his colleagues the assessment tools and practices that he and his fellow Trades and Industrial teachers use to help them help students learn.  He spoke about several software programs they use to assess student progress and to provide students with descriptive feedback to help them focus their study habits.  He talked about using assessment data to evaluate his teaching and to enable him to make more student-centered decisions.

The content of Josh's presentation was insightful and the strategies shared exemplary.  Near the end of it, though, he shared with us a rather simple analogy that has profoundly impacted the way I now view an educator's assessment role.

Josh shared with us that, along with other women in their church, his wife was reading a book entitled Leading and Loving It.  The book included an analogy that Josh took and applied to assessment.  It was the analogy of The Thermometer v. The Thermostat.

The Thermometer

Think about what a thermometer does.  A thermometer gives you a temperature at a certain point in time.  Let's pretend you have a thermometer in your home.  By checking its reading, you will know the air temperature in your home.

What does the thermometer do FOR the temperature in your home?  Nothing.  While a thermometer is a useful tool, it simply provides us with information.  It does nothing to alter or change that information.

The Thermostat

Now consider the thermostat.  Like the thermometer, the thermostat also checks the temperature at a certain point.  In fact, by checking the thermostat in your home you can find out the air temperature in your home just like you could with a thermometer.

But the thermostat also does something FOR the temperature in your home.  The thermostat takes the temperature, compares that to the DESIRED temperature outcome, and then makes adjustments to increase or decrease the temperature accordingly.

While a thermometer is a useful tool, a thermostat is a much more powerful tool and a much more impactful tool.  With only a thermometer you would be able to verify the fact that your house was too hot, too cold, or just right.  But with a thermostat you can actually control the temperature outcome.

Josh explained this analogy and then applied it to assessment by encouraging his colleagues to be thermostats - not thermometers.  Being a thermometer is fine if our goal for assessment is to determine a grade.  We can teach a unit of content, assess our students to see how well they learned it, record that "temperature", and move on.

But if our goal for assessment is to increase learning, then we have to be thermostats.  The thermostat teacher is constantly assessing so he or she knows where his or her students - collectively and individually - are in the learning process.  Then the thermostat teacher makes the necessary adjustments in teaching so that the "temperature" changes appropriately.  The thermostat teacher trains students to be thermostats as well, always self-assessing and analyzing feedback to determine what adjustments need to be made at their end.

Simply put, the thermometer teacher can document IF students learned.  The thermostat teacher increases learning.

So why do you assess?  If it is to increase learning, then consider how the analogy of The Thermostat might be applied to your classroom.

Thanks, Josh! 

Read more…

Guidelines for using Ning in a school setting

My school system - City of Salem Schools, VA - has undergone a lengthy process to determine what types of social networking should be available on our system's network. Until recently, all social networks were blocked by our filter. After much discussion and exploration, it was decided that social networking would be open for all faculty members. Faculty members would be treated as professionals who are able to use social networking appropriately within the work environment. (Our Barracuda filter has made it possible for us to open up certain sites for a specific group within our system.)

We also decided that access to social networking in general is not necessary for students within a school setting. In fact, it probably could lead to more harm than good. However, social networking does have educational value if used properly. Therefore, we decided that Ning would be the one social network available for use by students. Teachers have been encouraged to create Nings for use in the classroom but to follow certain guidelines to make sure that Nings can be used in a manner that maximizes safety and educational value at the same time.

If you're interested in using Ning in your school system, you might be interested in checking out the guidelines that we are using. Here they are: Ning Guidelines

I'd love to hear about anyone else's experiences using Ning in the school setting.
Read more…

A great reminder for students

Kudos to Salem High School math teacher, Erin Stenger, for thinking to put a sign like this right next to her doorway where students will see it each day as they leave her class.

It has been noted before on this website that for AFL to truly have its greatest possible impact, the students need to be using assessment-elicited feedback to measure their own progress and guide their own learning. Like most things that we want students to do, though, we must train them to do it. This is especially true for AFL since most students (just like most parents and most teachers) tend to look at grades from a summative position.


If we want students to view grades as feedback that guides their learning rather than as scores that just get averaged together to determine a grade, then we must do 2 things:


1. We must grade and assess in a formative manner rather than just collect a bunch of scores to average.

2. We must train our students.


This picture in Mrs. Stenger's room is a subtle but important example of this. Most importantly, it reveals the fact that AFL is a core philosophy that permeates the way Mrs. Stenger runs her classroom.


Here are some other blog posts that deal with the same idea of students knowing what they know:

1. Do They Know If They Know?

2. Did AFL Guide My Instruction Today?

3. Assessment FOR Learning - A quick and easy indicator

4. AFL - It's about students taking ownership of learning

Read more…
What a privilege it is to be able to observe great educators practicing their craft!

Recently I had a chance to be in the classroom of Michelle Kovac, Salem High School's Marketing teacher. She was teaching Advanced Marketing. Two things stood out to me.

1. Mrs. Kovac did an excellent job of weaving AFL strategies and techniques into her classroom.

2. The strategies employed by Mrs. Kovac were highly successful IN PART due to the strategies themselves but MAINLY (in my opinion) due to the enthusiastic manner with which she employed them.

Let's start with the second thing I noticed - enthusiasm. In my interactions with teachers at various schools over the years I have often heard teachers bemoan the fact that while they have tried to use creative or new strategies they have been unsuccessful due to the weak level of their students. I would be overly "Pollyanna-ish" if I said that students had no bearing on the ability of a teacher to be effective. However, what I have noticed more often is that strong students mask poor teaching much more frequently than weak students destroy great teaching.

Mrs. Kovac's Advanced Marketing class was an example of this situation. Advanced Marketing students are a diverse group. Some of them have been excellent students over the years. Some have struggled greatly. Some have had no disciplinary issues while others have had quite a few. Here's what they have in common, though. They are seniors in the spring - a time when seniors can be difficult to motivate.

I was amazed at what I saw in class that day. Mrs. Kovac's enthusiasm for the content was absolutely infectious. She acted as though Marketing was the coolest thing in the world, and as I sat in her class I began to agree! She was a cheerleader, an entertainer, and a motivator - and the kids appreciated it. It was obvious that this was who she was in class on a daily basis because the kids thought it totally normal. Try faking enthusiasm on an occasional basis and students will see right through you.

The atmosphere in Mrs. Kovac's class was almost the way I envision an elementary classroom. What I mean is that these kids - these seniors - were excited to be there. They laughed. They joined in. When it was time to start working on projects they actually got up and RAN to get their supplies. One kid begged Mrs. Kovac to let her correct her quiz from the day before - not for points, not for a higher grade, just to be able to be correct. Mrs. Kovac finally "relented" and gave the student "permission" to correct her quiz!

When one student asked a particular question Mrs. Kovac said, "I feel a song coming on!" The entire class broke into a song about marketing. Seniors in high school willingly singing a song about Marketing in class - wow! That's what enthusiasm can do. It's what Parker Palmer describes in his book, The Courage to Teach. A teacher can lift up a class with his or her enthusiasm if the teacher has the courage to step out from behind the wall of safety that educators often erect. The courage that Mrs. Kovac showed to be herself, to be enthusiastic, and to share her love of her content is what made the assessment strategies she used work so well.

Here are the strong assessment strategies used that day by Mrs. Kovac:

Do Now Assignment - Predict Your Score
On the smart board were the numbers 3, 7, and 5. There were also 3 statements: "Guessed Correctly", "Guessed Wrong - Scored Higher", and "Guessed Wrong - Scored Lower". Students had to match a number with a statement. The day before students had taken a quiz and had predicted what their grade would be based on how well they had prepared for the quiz. For this day's Do Now assignment students had to match the numbers with the correct phrase. In other words they were trying to figure out that 3 students had correctly predicted their grades, 7 students had guessed wrong but scored higher, and 5 students had guessed wrong and scored lower.

So what are the assessment strengths here? Mrs. Kovac was training her students to analyze their preparation which in turn should help her students understand the role that preparation has in a student's success. This sort of feedback will hopefully encourage students to prepare more effectively in the future. Going back and analyzing how accurate their predictions were should help this knowledge sink in even more. It also gave Mrs. Kovac an opportunity to build them up by (enthusiastically) pointing out that they tended to underestimate themselves.

Why Did You Miss What You Missed?
When Mrs. Kovac handed back the students' quizzes she asked them to go over them and write down next to each question they missed why they missed it and what messed them up. She was not going to go over the quizzes with them that day. Instead, she told them that she first wanted to collect their feedback on why they missed what they missed. She told them that this feedback could alter how she goes over the quiz with them. She wanted it to be a learning experience rather than simply a listing of correct answers. When she went over the quiz with them the next day she wanted to be able to reteach/explain to them what they NEEDED to hear so they wouldn't miss the question next time around. This was a great example of a teacher collecting assessment data to guide instruction. She also told the students that she wanted them to get feedback for themselves so that they could ask appropriate questions. (By the way, this was when the one student begged to be able to correct her quiz.)

Analyzing the Competency List
Marketing classes teach based on a Marketing competency list the same way other courses might teach specific state or national standards. Mrs. Kovac had her students pull out their competency lists. The fact that they all had them and quickly pulled them out spoke volumes! Then they went through the competencies that they had recently covered and each student rated each of those competencies on a scale of 1-5 based on how well the student understood the specific competency. These students were fully involved in analyzing their own progress. Their competency list was becoming a study guide for the end of the year and a way for them to take ownership of their studies. Mrs. Kovac's students obviously did this sort of activity regularly because they were very familiar with the competency list. One of them even pointed out that she had forgotten to mention 2 of the competencies they had covered. Another kid excitedly pointed out that they were almost done with the list. When Mrs. Kovac (enthusiastically) asked, "Doesn't it feel good?" a chorus of students answered, "Yes!"


Mrs. Kovac's classroom is a good example of small ways to use AFL strategies to give students ownership of their own progress. Would those strategies work in any classroom? Yes - but they will work BEST when coupled with genuine enthusiasm.

Read more…

New Terminology: Scoring v. Grading

After studying Assessment FOR Learning pretty intensely for the past few school years, I am now beginning to think that we might do ourselves a favor if we would change some of our terminology.  Specifically, I think it's time to stop using the words "grading" or "grade" as often as we do and replace them - at times - with "scoring" or "score".


You don't have to go very far down the AFL road to realize that traditional grading practices often get in the way of our attempts to use AFL strategies.  Traditional grade books and grading strategies typically average together all of a student's grades for the grading period to determine a final grade.  Therefore, practice assignments such as homework and classwork will have an impact on the student's grade.  Since the concept of assigning lots of practice so that students and teachers can receive the feedback necessary to increase learning is central to AFL (see Heart of AFL), averaging practice grades into a student's overall grade becomes obviously problematic.  What if the additional practice helps a student learn but also lowers the student's grade?  The natural reaction to this problem is for teachers to feel that they should not grade practice assignments.  For more on this topic see:

So the philosophy of AFL naturally leads to teachers feeling as though they should not grade practice assignments.  This is where Newton's third law of motion comes into play: "To every action there is always an equal and opposite reaction."  When students realize that some things are graded and some things are not, they react by asking before most assignments, "Is this going to be graded?"  Implied in their question is the idea that if the answer is "Yes" then they will work harder than if the answer is "No".  As a result, teachers are reluctant to not grade assignments - even if they agree with the philosophy of practice assignments not lowering a grade - for fear that students won't work hard and, therefore, won't learn as much. 


So we're left with a quandary.  We don't want to let practice impact the student's final grade but we want students to work on each assignment as though their final grade depended on it.  Part of this quandary is of our own making.  As explored previously in What we WANT students to do v. What we TRAIN students to do, we wish that students worked for the love of learning but we then use points and grades as a Sea World trainer uses a fish.  It's difficult to argue that students should not be motivated by grades when we, in turn, use grades as motivators.  We have to find a new way.  Perhaps our new AFL philosophy requires some new terminology.


What would happen if we started "scoring" all assignments and "grading" only a few?  The term "grading" implies the following:

  1. The teacher will assess how well the student did on the assignment.
  2. The student will receive feedback on how well they have mastered the content.
  3. The grade will go into the grade book to be used to help determine the student's final grade.

In most classrooms, "grading" is the only tool the teacher has - or uses - for providing feedback.  There is an old adage that describes this problem: "When the only tool you have is a hammer, every problem looks like a nail."


"Scoring" could be the new tool needed to help us out of our quandary.  The difference between scoring and grading is in implication #3 from the list above.  Both scoring and grading provide the teacher with feedback and both provide the student with feedback.  However, a score on an assignment may or may not be used by the teacher to determine the final grade.  Here's how I envision scoring working in a typical AFL classroom:

  1. The teacher assigns practice every day.
  2. The teacher provides feedback on all practice.  While this feedback is often provided very informally, the majority of feedback given formally is in the form of a score.
  3. The score looks very similar to a grade.
  4. The score goes into the grade book.
  5. The students understand up front that the teacher will be looking over all of a student's scores - and grades - to determine what the appropriate final grade is for the student.  While graded assignments are the few that will definitely count toward the final grade, they will be much fewer in number than the scored assignments.  Rather than being tied down to averaging all graded assignments, the teacher who uses scoring will now be able to study the evidence and arrive at the most appropriate final grade.

The point here is that every score counts toward helping the teacher determine a grade.  When students ask, "Is this graded?" what they really mean is, "Does this count?"  With scoring, the answer to that question is:

"Yes, it counts.  Everything counts.  As the teacher, I will be analyzing ALL the evidence - just like a good detective - before arriving at a conclusion (your grade).  How it counts could be different for each of you, depending on how you perform, but ALL assignments count."

Scoring satisfies our desire to be AFL-ish:

  • teachers receive feedback
  • students receive feedback
  • practice doesn't have to lower - or overly inflate - the final grade

At the same time, scoring doesn't entice students to fall into the trap of only working "when it counts."


What do you think?


Read more…

Good Teaching is a Lot Like Coaching a Mule!

If you’re reading this and are from Salem, VA you might have a clue what that statement means.  If you’re not from Salem you’re probably wondering what the heck I’m talking about.

A little background: High school football is a pretty big deal in Salem, just like it is in so many small towns across our country.  But in Salem, Virginia, football might be a slightly bigger deal than in most towns.  The Salem Spartans have had great success on the football field for many years.  In the past 26 years, they’ve won 17 district, 13 region, and 6 state championships.  There have been many reasons for that success, but one constant throughout all those successful years has been the tough blue-collar play of the offensive line.  And that offensive line is collectively known as The Mules.

From 2000-2004, I had the privilege of being a coach in Salem’s football program.  I was at the bottom of the totem pole – middle school assistant coach – but it was tons of fun to work with the kids and to learn from the amazing coaches in the system.  Willis White, the Virginia High School League Hall of Famer, was the head coach for the high school team.  He liked to remind me that he had “holes in his underwear older than me.”  A buddy I coached with once told me that Larry Bradley, the head coach of our middle school team, had forgotten more football than I’d ever know.  There were so many excellent coaches in the program, but the one I learned the most from was Billy Miles, the coach of The Mules.

As a former high school offensive lineman who had already coached a middle school offensive line at another school, I thought I had a pretty good grasp of what it took to be a successful offensive line coach.  When I was hired to coach for Salem, I sat down with Coach Miles so that he could teach me how to coach Mules.  As I came to appreciate the way The Mules were coached, I also came to realize that I had a lot to learn! 

The Mules were taught higher level thinking.  They learned rules and philosophies which they then applied to the thousands of different situations they might encounter.  They made calls, did combo blocks, read the defense, talked to one another, and changed their plans and assignments all within a matter of seconds depending on how the defense was lined up.  Their ability to apply their knowledge was evident on Friday nights (and Saturdays in the playoffs) as game after game the Spartans were able to pound the ball behind The Mules and right down the opponent’s throat.  Coach Miles was a phenomenal coach and an even better teacher.  I had never been around a high school offensive line that operated at such a cerebral level.

As Coach Miles taught me how to play like a Mule – a prerequisite to being able to coach Mules – I’m sure he could see that I was getting excited.  I couldn’t wait to take all that I was learning and share it with the Mules of the future at our middle school.  It was then that Coach Miles reminded me of something: Before I could get my players to this high level of play, I had to make sure they mastered the basics. 

Coach Miles told me to stay away from teaching them how to read the defense and make calls until they could first get in a stance and could make the appropriate first step.  And he told me that I needed to refresh the basics with my players every single day. 

So that’s what we did.  We practiced getting into a three point stance.  We did it over and over again until they had mastered it.  Then we did it some more.  We got into three-point stances and took our first steps until they could do it in their sleep.  Then we did that some more.  As the season went on, we practiced the basics a little less than we did earlier in the year.  After all, I needed to teach my guys how to apply their knowledge to game situations, and that takes time.  But there wasn’t a single practice where we didn’t focus at least a little bit on the basics.

Occasionally I would go up to the varsity practice field to watch Coach Miles and his Mules.  Man, did they work hard!  If you think Marine Corps drill sergeants are tough, you must never have watched Coach Miles!  He made the Marine Corps look like Sunday School!  But Coach Miles loved his Mules and they respected him.  It was a joy to watch them put into practice all he had taught them. 

But even The Mules practiced the basics every day.  That’s right – The Mules, who could read a defense and adjust their blocking schemes in a matter of seconds, still practiced getting in their stance and taking the correct first step every single day. 

Coach Miles never assumed they were beyond the basics.  Therefore, there was no way they would ever forget the basics.  You really can’t become a cohesive and dominant offensive line if you don’t have the basics down.  I suppose Coach Miles could have skipped the basics and assumed that as a varsity coach he was above that.  I suppose when a player messed up the basics he could have bemoaned the woeful and inadequate coaching the players received at the middle school level.  But instead, Coach Miles recognized that without the basics his team would never get to the higher level, and he took the responsibility on himself to make sure his players had the skills they needed.

See how this applies to teaching? 

Great teaching is just like coaching The Mules.  The goal is higher level thinking.  The goal is to take what has been learned and then apply it to new situations.  But it all starts with the basics.  Students newer to the content (like my middle school linemen) need more time focusing on the basics, but ALL STUDENTS need to continually refresh the basics if they’re going to truly reach mastery.  Students more familiar with the content might not need as much time refreshing the basics, but they still need to revisit them to some degree on a regular basis.

I think sometimes we educators overlook the importance of the basics.  We feel like we have too much to cover to spend time going over the basics.  For example, an Algebra 2 teacher might not feel like he should have to focus on addition, subtraction, multiplication, division, and using fractions.  A US History teacher might not feel like she has time to focus on vocabulary not directly related to her state standards.  A Science teacher might not feel as though she should have to routinely revisit proper lab procedures.  And these teachers would be (in my opinion) incorrect.

I routinely hear teachers point out that a lack of basic skills prevents students from mastering their course content.  I think those teachers are correct.  Students often lack the basics which in turn prevents higher level mastery.  My response, though, is to copy Coach Miles.  If the basics are what are preventing students from having success, then focus on the basics.  Perhaps part of the reason students are lacking the basics is that we have a tendency to assume they should have been learned already and as a result move away from them.

Consider your own content area for a moment.  Would your students have more success if they had a better level of mastery of the basics?  Would you have a greater chance of helping them reach higher levels of application if they knew the basics better?  If the answer to either or both questions was “yes,” then the only acceptable next step (assuming you want your students to have success and to reach higher levels of application) is to figure out how to work regular reviews of the basics into the fabric of your classroom.

This is how you coach The Mules, and this is how you teach students.

Read more…

A Letter to the Editor from Rick Wormeli

Recently, several letter writers to the Forest City Summit, an Iowa newspaper, have disparaged standards-based grading.  Specifically, they disparaged Rick Wormeli's work in that field.  As a result, Mr. Wormeli wrote a response to those letter-writers, and the newspaper agreed to run it.

While I am personally unfamiliar with the events in Forest City Schools, IA that led to these letters being written, public arguments like this over grading issues always cause me to wonder if the school division employed too much of a top-down method of improving assessment strategies.  

At its heart, standards-based learning really shouldn't be controversial.  Learning should be measured against standards and communicated in terms of standards so that grades actually represent learning and, more importantly, so teachers and students know where to focus their instructional and learning efforts.

When individual teachers implement solid and well-communicated SBL strategies, students tend to appreciate the descriptive and helpful nature of the feedback.  Students tend to appreciate knowing where their strengths and weaknesses are so that they can then focus on improving where necessary.  And typically, when students appreciate what is going on in class and feel like it helps them learn, parents are supportive.

However, when policies are implemented at the division level and then mandated, it is not uncommon to create controversy where none need exist.  I would encourage schools and divisions to focus on a meaningful professional development journey - to take the long-view approach - instead of looking to change practices by changing policy.

Again, I do not know exactly what went on in this Iowa school district, but I do know that educators exploring the merits of standards-based learning would benefit from reading Mr. Wormeli's letter.

Here's a link to the letter in its original form on the Forest City Summit's website: 

http://globegazette.com/forestcitysummit/opinion/letter-to-the-editor/article_937be5bc-b62a-5874-aec1-d4053dfff9f3.html

Below is the same letter copied and pasted into this blog:  

To the editor:

In recent letters to the editor in the Summit, my work was mentioned as one catalyst for the shift in grading practices in Forest City Schools from traditional to standards-based grading. Many of the claims made by the authors misrepresent me and these practices, however, and I’d like to set the record straight.

Most of us think the purpose of grading is to report what students are learning, as well as how students are progressing in their disciplines.  It is important for grades to be accurate, we say; otherwise we can’t use grades to make instructional decisions, provide accurate feedback, or document student progress.

These are wise assertions for grading. Nowhere in these descriptions, however, is grading’s purpose stated as teaching students to meet deadlines, persevere in the midst of adversity, work collaboratively with others, care for those less fortunate than ourselves, or to maintain organized notebooks. While these are important character attributes, we realize that none of the books or research reflecting modern teaching/parenting mentions grading as the way in which we instill these important values in our children.  

We actually know how to cultivate those values in others, but it isn’t through punitive measures and antiquated notions of grading. Myron Dueck, author of Grading Smarter, Not Harder (2014), writes,

“Unfortunately, many educators have fallen into the trap of believing that punitive grading should be the chief consequence for poor decisions and negative behaviors. These teachers continue to argue that grading as punishment works, despite over 100 years of overwhelming research that suggests it does not (Guskey, 2011; Reeves, 2010).”

In 2012, researcher John Hattie published Visible Learning for Teachers: Maximizing Impact on Learning, with research based on more than 900 meta-analyses, representing over 50,000 research articles, 150,000 effect sizes, and 240 million students. He writes,

“There are certainly many things that inspired teachers do not do; they do not use grading as punishment; they do not conflate behavioral and academic performance; they do not elevate quiet compliance over academic work; they do not excessively use worksheets; they do not have low expectations and keep defending low quality learning as ‘doing your best’; they do not evaluate their impact by compliance, covering the curriculum, or conceiving explanations as to why they have little or no impact on their students; and they do not prefer perfection in homework over risk-taking that involves mistakes.” 

Those interested in research on standards-based grading and its elements are invited to read books written by Robert Marzano, Tom Guskey, Carol Dweck, Doug Reeves, John Hattie, Susan Brookhart, Grant Wiggins, Tom Schimmer, and Ken O’Connor. Matt Townsley, Director of Instruction in the Solon Community School District in Iowa, has an excellent resource collection at https://sites.google.com/a/solon.k12.ia.us/standards-based-grading/sbg-literature.

A caution about worshiping at the research altar, however: Not all that is effective in raising our children has a research base. A constant chorus of, “Show me the research,” adds distraction that keeps us from looking seriously and honestly at our practices. When we get our son up on his bicycle the first time, and he wobbles for a stretch of sidewalk then crashes abruptly into the rhododendrons, we give him feedback on how to steer his bicycle, then ask him to try again. Where’s the vetted research for doing that? It’s not there, and we don’t stop good parenting because we don’t have journaled research. 

Trying something, getting feedback on it, then trying it again, is one of the most effective ways to become competent at anything. How does an accountant learn to balance the books? Not by doing it once in a trumped-up scenario in a classroom. Can a pilot re-do his landings? Hundreds of times in simulators and planes before he actually pilots a commercial airliner with real passengers. How do we learn to farm? By watching the modeling of elders and doing its varied tasks over and over ourselves. How do we learn to teach? By teaching a lot, not by doing it once or twice, then assuming we know all there is. I want a doctor who has successfully completed dozens of surgeries like the one she’s about to do on me, not one who did one attempt during training.  

This is how all of us become competent. Some individuals push back against re-doing assignments and tests, however, because there’s a limited research base for it, or so they claim. (There’s actually a lot of research on the power of reiteration in learning.) My response to the pushback is: When did incompetence become acceptable? How did we all learn our professions? Does demanding adult-level, post-certification performance on the first attempt at something during the young, pre-certification learning experience help students mature?

Parents should be deeply concerned when teachers abdicate their adult roles and let students’ immaturity dictate their learning. A child makes a first attempt to write a sentence but doesn’t do it well, and the teacher records an F for “Sentence Construction” in the gradebook with no follow-up instruction and direction to try it again? Really? We can’t afford uninformed, ineffective teaching like this. To deny re-learning and reassessment of the major standards we teach is educational malpractice. Parents should thank their lucky stars for teachers who live up to the promise to teach our children, whatever it takes. 

We can’t be paralyzed by the notion put forth by Dr. Laura Friesenborg in her Nov. 25 letter that juried journals of research are the only source of credibility. Dr. Friesenborg says that there has been “…no robust statistical analysis of students’ national standardized test scores, pre- and post-implementation” of the practices for which I advocate. This is disingenuous because it’s physically and statistically impossible to conduct such a study; there are so many confounding variables as to make the “Limitations of the Study” portion of the report the length of a Tom Clancy novel. We do not have the wherewithal to isolate students’ specific outcomes as a direct function of teachers’ varied and complex implementations of the many associated elements we find in SBG practices, including the effects of varied home lives and prior knowledge. If she’s so proof-driven, where is her counter-proof that traditional grading practices have a robust statistical analysis of pre- and post-implementation? It doesn’t exist.

She dismisses my work and that of the large majority of assessment and grading experts as anecdotal and a fad education program, declaring that I somehow think students will magically become intrinsically motivated. This is the comment of someone who hasn’t done her due diligence regarding the topic, dismissing something because she hasn’t explored it deeply yet. Be clear: There’s no magic here – It’s hard work, much harder than the simplistic notion that letter grades motivate children.

Friesenborg diminishes the outstanding work of Daniel Pink, whose book Drive is commonly accepted as well researched by those in leadership and education, and she does not mention the work of Vygotsky, Dweck, Bandura, Lavoie, Jensen, Marzano, Hattie, Reeves, Deci, Ripley, de Charms, Stipek and Seal, Southwick and Charney, or Lawson and Guare, whose collective works speak compellingly to the motivational, resilience-building elements found in standards-based grading. Is it because she is unaware of them, or is it because their studies would run counter to her claims? Here she is distorting the truth, not helping the community.

We DO have research on re-learning/assessing (see the names mentioned above), but it’s very difficult to account for all the variables in the messy enterprise of learning and claim a clear causation. Some strategies work well because there’s support at home, access to technology in the home, or a close relationship with an adult mentor, and some don’t work because the child has none of those things. Sometimes we can infer a correlation in education research, but most of the time, good education research gives us helpful, new questions to ask, not absolute declarations of truth. When research does provide clear direction, we are careful still to vet implications thoughtfully, not dismiss what is inconvenient or doesn’t fit our preconceived or politically motivated notions.

When we are anxious about our community’s future, we want clear data points and solid facts, but teaching and learning are imperfect, messy systems, and we’re still evolving our knowledge base. Many practices have stood the test of time, of course, but only a minority of them have a strong research base. We can’t cripple modern efforts by waiting for one decisive research report to say, “Yay or Nay.” At some point, we use the anecdotal evidence of the moment, asking teachers to be careful, reflective practitioners, and to welcome continued critique of practices in light of new perspectives or evidence as they become available. If we’re setting policy, we dive deeply into what is available in current thinking and research nationwide so our local decisions are informed.  

In her letter, Friesenborg describes standards-based grading as “radical.” Please know that it is quite pervasive: thousands of schools across the country are actively investigating how to implement it or have already done so. Most states, in fact, are calling for competency-based learning and reporting to be implemented. Friesenborg states that the Iowa State Board of Education makes standards-based learning a legislative Advocacy Priority. This is a positive thing, and SBG practices promote exactly this. We want accurate reporting. That means we separate non-curriculum reports from curriculum reports. It helps all of us do our jobs, and it provides more accurate tools for students to self-monitor how they are doing relative to academic goals.

Such grading practices are not even close to the definition of radical. Read the observations of schooling in Greece, Rome, Egypt, and Babylonia, and on through the Renaissance, the 1700s, the 1800s, and the 1900s: grades reporting what students had learned in their subjects were the predominant practice. There were separate reports of children’s civility and work habits. That’s what we’re doing here with SBG, nothing else. It’s dramatically more helpful than a grade that indicates a mishmash of “Knowledge of Ecosystems, plus all the days he brought his supplies in a timely manner, used a quiet, indoor voice, had his parents sign his reading log for the week, and brought in canned food for the canned food drive.” In no state in our country does the math curriculum say, “Has a nice neat notebook.” That’s because it’s not a math principle. It has no business obscuring the truth of our child’s math proficiency.

We have plenty of research, let alone anecdotal evidence, that reporting work habits in separate columns on the report card actually raises the importance of those habits in students’ minds, helping them mature more quickly in each area. The more curriculum we aggregate into one symbol, however, the less accurate and useful it is as a report for any one of the aggregated elements or as a tool of student maturation. SBG takes us closer to the fundamental elements of good teaching and learning.

Rick Wormeli

Read more…

A Sports Analogy for Assessment

On page 96 of the book "A Repair Kit for Grading," the author, Ken O'Connor, draws a useful analogy between performance-based assessment and a band or a sports team:


"It is critical that both teachers and students recognize when assessment is primarily for learning (formative) and when it is primarily of learning (summative). Students understand this in band and in sports, when practice is clearly identified and separate from an actual performance or game."


If we follow this analogy, the final exam for a unit or course becomes the big game for the sports team. If you are training basketball players, don't you think the best way to test their abilities is to have them play a game? The coach sets out the big game as the final exam, and all of the activities that lead up to that game are meant to help the players prepare for it.


The diagnostic assessment is an initial activity that puts students in a simulated game to see what their strengths and weaknesses are. Once they have been identified, the formative assessments are the practice sessions that help students refine specific technical skills, build leadership skills, raise stamina and work on team building, all necessary for each player to perform at his/her best and for the team to win.


Note that in this case,


• All of the players clearly understand what is expected of them by the time the big game comes around.

• All of them understand what their individual and collective strengths and weaknesses are and are motivated to improve their skills in order to support the team.

• The coach wants the players to do their best and pushes the players to practice hard so they can do so.

• The team knows that the practices don't give them points in the final game; it's the game that counts, not the practices, although the more they practice, the better they will play in the game. After the big game, the team evaluates its performance, draws up new strategies to improve, and starts practicing again.


Designing a multi-stage, complex performance task as the final exam allows teachers to identify all of the discrete skills students will need to perform well at the end, so those skills can be practiced in low-stakes situations, tried out in scrimmage games, and practiced again so that everybody feels ready for the big game. This movement back and forth between instruction and application, between drilling discrete skills and performing the whole task, is what helps students learn well. It also helps them learn how to learn, a capacity that comes in handy as students take on further personal and academic responsibilities.


Although teachers don't give the same or similar tests more than once the way coaches do, we do teach more complex skills that build on what students had to learn for the previous exam. In this way, the capacities we aim to develop in our students by the end of the semester or year are complex and broad.


This analogy has provided me with a variety of new perspectives on assessment as well as some criteria to evaluate my own assessment strategies. I have become a better teacher by practicing this concept and I hope it gives others some valuable insight too.

Read more…

Grading (as it relates to AFL)

Grading and assessment are two distinct yet overlapping topics.  This site is dedicated primarily to assessment - the getting and giving of feedback that helps teachers adjust their teaching and students adjust their learning.  However, it is impossible to talk about assessment without occasionally discussing grading.  Therefore, grading posts and resources pop up on this site from time to time.  As a way to help members find these resources, this blog post has been created to serve as a collection of grading links.  Anything posted on this site related to grading can be found on this blog.

 

Also, please note that as more examples are added to this site, they will also be added to this blog.


Videos:

 

Blog Posts:

Pictures:

Stories in the News:

Faculty Meeting Conversations

  • 11/12/14 - Pretend You're A Grade Coach
  • 2/24/16 - Using AFL/SBL to Analyze a Common Assessment Practice: Earning Points Back on a Test
  • 1/11/17 - Applying SBL Philosophy
Read more…

Response to a Parent (from Rick Wormeli)

There's no other way to put it...  This is good stuff!

As schools and teachers adopt the philosophies of Assessment FOR Learning, it's only natural that grading practices will begin to change. (Click here for more info on grading as it relates to AFL.)  We need to realize that some of those changes will seem strange to some of the parents of our students.  It's important that we can articulate why we make the grading decisions we do.

Rick Wormeli has composed an excellent and thoughtful response to concerns a parent had about grading practices that reflect the philosophy of AFL and Standards Based Grading.  This response is recommended reading for all teachers. Not only will it prepare you to respond to people in your community who might question your practices, it might also help you explain to colleagues who are confused by such practices as well.

Click on the following link to read Rick's response: http://www.adams12.org/files/learning_services/Wormeli_Response.pdf

Read more…

Assessment for Learning/Grading

Interesting ideas. There are those who would say that if an assessment is graded (which is not the same as scored with rubrics), it is probably not being used by the student for learning as well as it could be.
Research from Wiliam would support this idea.
I have always maintained that it is more a function of the culture in the classroom.
One thing is for certain: if instructors use more assessment that is not graded, they will eventually get more buy-in from students that assessment's major function is continuous improvement.
I would love to hear other points of view.
Read more…
As a teacher, have you ever experienced anything similar to the following scenario? You teach your course content over a period of time. The day before your big test you have a review activity of some sort. The review activity is a good one. It goes well, but during the activity you realize that your students don’t know the material all that well. Considering the number of days you spent covering it, you would have thought they would have known it better by now. The next day on the test the students end up doing fairly well – but probably not as well as they could have done.

If you have experienced a situation like this, then you have experienced a situation in which AFL has been used, but not to its fullest extent. If kids did better on the test than they did the day before on the review, then they have obviously used the feedback from the review to guide their studying. That is AFL at work. But what if the kids had come in on the review day already knowing the content as well as they did on the test day? If that had been the case, then the review day could have been an opportunity to go even further with the content, to master it even better, or to apply it in new ways. AFL strategies could have been used to make this happen. AFL assessment strategies could be used along the way to help learning “sink in and stick.”

I would encourage you to consider assessing more frequently so that students are more frequently engaged with the content and regularly (daily) analyzing their understanding. By the time the review comes along, they should already know what they know and know what they have yet to master. This would be the ideal learning situation. Here are some strategies that, IF USED FOR THIS PURPOSE, could be helpful AFL practices:

  1. A short daily quiz – The same quiz could even be given on multiple days. It doesn’t have to count much. It might not count at all. On a daily basis, though, the students have a chance to analyze what they know and what’s important. Students need to be informed that this is the purpose of the daily quiz or else they will just see it as another assignment.
  2. A rubric for students to check – This idea will be described more elaborately in a future post. For now, what if students had a rubric of important information? Each day they could have time in class to rate how well they know the content. This would allow them to assess themselves daily and to review material daily.
  3. Exit questions – Each day students could have a few questions to answer at the end of class. They could find the answers in their notes, which would cause them to look back over what they had learned. Never end a class by simply ending notes. Always have students go back over what was covered and analyze how well they know the key points.
  4. A Do Now about the previous day – Students could start each day with a Do Now (anticipatory set) that requires them to look back at what they learned the day before.

None of these strategies are unique to AFL, and I doubt any of them sound all that revolutionary to a teacher with any experience. Remember – AFL isn’t about what strategies you use as much as HOW and WHY you use them. This is what causes a teaching strategy to become an AFL tool. You are assessing students frequently in a manner that allows the students to use the feedback to guide their learning. That’s AFL.
Read more…

Sometimes when you're learning a new skill or trying to figure out how to apply a new philosophy, it helps to watch that skill or philosophy being used or implemented in a totally different arena.  Thinking outside the box and adopting new ideas can be difficult when you're extremely familiar with your own domain.  Observing the skill or philosophy at work in someone else's domain is less threatening.  Once you are able to see the benefit of the skill or the power of the philosophy it might be easier to figure out how to include it into your personal realm of familiarity.

I think this might hold true for the application to the classroom of the philosophies of Assessment FOR Learning, Standards Based Grading, and Measuring Student Growth.

Below is a recent Sports Illustrated article about the Oklahoma City Thunder's Kevin Durant.  As I read it, I was struck by just how much sense it makes to assess for the purpose of learning (not grading), to grade and assess based on standards, and to intentionally and meaningfully measure growth.  It just makes so much sense when it comes to improving in life, as evidenced by this article about Durant's attempts to improve as a basketball player.  I wonder why it doesn't always make sense in the classroom, where we educators are working tirelessly to get students to improve.

Read the article below for yourself, and as you do, pay attention to the intentional steps Kevin Durant has taken to improve his shooting.

  1. He is constantly - daily - assessing himself.
  2. He has broken down shooting into "standards" based on different locations on the floor.
  3. He is using the feedback from the assessments to determine what "standards" he needs to practice and where he needs to grow.
  4. His improvement is constantly being charted so that he and his personal trainer/shot doctor/video analyst/advance scout can keep adjusting the learning plan for maximum growth.

It just makes so much sense for him to do this.  Durant wants to grow, and this is how one intentionally sets out to grow.  

Likewise, it makes sense to me that every teacher would want to:

  1. Constantly - daily - assess students.
  2. Break down learning into standards based on content knowledge and skills.
  3. Use assessment feedback to determine which standards individual students need to focus on in order to grow.
  4. Constantly chart improvement so that learning plans can be adjusted for maximum growth.

So read the article below, look for the examples of Assessment FOR Learning, Standards Based Grading, and Measuring Student Growth, and then consider how you could better apply them to your classroom.

HOW 'BOUT THEM APPLES?

Copied from http://www.sportsillustrated.com and written by Lee Jenkins (@SI_LeeJenkins)

On the day after the Heat won their 27th game in a row, Kevin Durant sat in a leather terminal chair next to a practice court and pointed toward the 90-degree angle at the upper-right corner of the key that represents the elbow. "See that spot," Durant said. "I used to shoot 38, 39 percent from there off the catch coming around pin-down screens." He paused for emphasis. "I'm up to 45, 46 percent now." Durant wore the satisfied expression of an MIT undergrad solving a partial differential equation. You could find dozens of basic or advanced statistics that attest to Durant's brilliance this season – starting with the obvious, that he became only the seventh player ever to exceed 50% shooting from the field, 40% from three-point range and 90% from the free throw line – but his preferred metric is far simpler. He wants what Miami has, and he's going to seize it one meticulously selected elbow jumper at a time.

The NBA's analytical revolution has been confined mainly to front offices. Numbers are dispensed to coaches, but rarely do they trickle down to players. Not many are interested, and of those who are, few can apply what they've learned mid-possession. Even the most stat-conscious general manager wouldn't want a point guard elevating for an open jumper on the left wing and thinking, Oh no, I only shoot 38% here. But Durant has hired his own analytics expert. He tailors workouts to remedy numerical imbalances. He harps on efficiency more than a Prius dealer. To Durant, basketball is an orchard, and every shot an apple. "Let's say you've got 40 apples on your tree," Durant explains. "I could eat about 30 of them, but I've begun limiting myself to 15 or 16. Let's take the wide-open three and the post-up at the nail. Those are good apples. Let's throw out the pull-up three in transition and the step-back fadeaway. Those are rotten apples. The three at the top of the circle – that's an in-between apple. We only want the very best on the tree."

The Thunder did not win 27 straight games. They did not compile the best record. Durant will not capture the MVP award. All he and his teammates did was amass a season that defies comparison as well as arithmetic. They scored more points per game than last season even though they traded James Harden, who finished the season fifth in the NBA in scoring, five days before the opener. They led the league in free throws even though Harden gets to the line more than anybody. They posted the top point differential since the 2007-08 Celtics, improving in virtually every relevant category, including winning percentage. Their uptick makes no sense unless Durant was afforded more shots in Harden's absence, but the opposite occurred. He attempted the fewest field goals per 36 minutes of his career. He didn't even take the most shots on his team, trailing point guard Russell Westbrook, and he seemed almost proud that his 28.1 points per game weren't enough to earn the scoring title for the fourth consecutive year. "He knows he can score," says Thunder coach Scott Brooks. "He's trying to score smarter."

Durant is lifting Oklahoma City as never before, with pocket passes instead of pull-ups, crossovers instead of fadeaways. He remains the most prolific marksman alive, unfurling his impossibly long arms to heights no perimeter defender can reach, but he has become more than a gunner. He set career marks in efficiency rating, assists and every newfangled form of shooting percentage. "Now he's helping the whole team," says 76ers point guard Royal Ivey, who spent the past two seasons with the Thunder. "Now he's a complete player." The Thunder are better because Durant is better. Of course, the Heat will be favored to repeat as champions, and deservedly so. But Oklahoma City has been undercutting conventional wisdom for six months.

NBA history is littered with stars who languish in another's shadow, notably Karl Malone, Charles Barkley, Patrick Ewing and Reggie Miller through the Michael Jordan reign. Oklahoma City lost to Miami in the Finals last June, and Durant will surely be runner-up to LeBron James in the MVP balloting again. Durant is only 24 and is as respectful of James as a rival can be, but he's nobody's bridesmaid. "I've been second my whole life," Durant says. "I was the second-best player in high school. I was the second pick in the draft. I've been second in the MVP voting three times. I came in second in the Finals. I'm tired of being second. I'm not going to settle for that. I'm done with it."

"I'm not taking it easy on [LeBron]. Don't you know I'm trying to destroy the guy every time I'm on the court?"

Justin Zormelo doesn't have a formal title. He is part personal trainer and part shot doctor, part video analyst and part advance scout. "He's a stat geek," Durant says, expanding the job description. Zormelo sits in section 104 of Oklahoma City's Chesapeake Energy Arena, with an iPad that tells him in real time what percentage Durant is shooting from the left corner and how many points per possession he is generating on post-ups. After games, he takes the iPad to Durant's house or hotel room and they watch clips of every play. Zormelo loads the footage onto Durant's computer in case he wants to see it again. "If I miss a lot of corner threes, that's what I work on the next morning before practice," Durant says. "If I'm not effective from the elbow in the post, I work on that." Zormelo keeps a journal of their sessions and has already filled two notebooks this season. Last year Zormelo noticed that Durant was more accurate from the left side of the court than the right, and they addressed the inconsistency. "Now he's actually weaker on the left," Zormelo says, "but we'll get that straightened out by the playoffs."

Zormelo, 29, was a student manager at Georgetown when Durant was a freshman at Texas, and they met during a predraft workout at Maryland that included Hoyas star Brandon Bowman. Durant embarked on his pro career and so did Zormelo, landing an internship with the Heat and a film-room job with the Bulls before launching a company called Best Ball Analytics in 2010 that has counted nearly 30 NBA players as clients. Zormelo kept in touch with Durant, occasionally e-mailing him cutups of shots. They bonded because Zormelo idolizes Larry Bird and Durant does, too.

Durant left a potential championship on the table in 2011, when Oklahoma City fell to Dallas in the Western Conference finals. About two weeks after the series, Durant scheduled his first workout with Zormelo in Washington, D.C. "I didn't sleep the night before," Zormelo remembers. "I was up until 4 a.m. asking myself, What am I going to tell the best scorer in the league that he doesn't already know?" They met at Yates Field House, where Georgetown practices, and Zormelo told Durant, "You're really good. But I think you can be the best player ever." Durant looked up. "Not the best scorer," Zormelo clarified. "The best player." It was a crucial distinction, considering Durant had just led the league in scoring for the second year in a row yet posted his lowest shooting percentage, three-point percentage and assist average since he was a rookie. He was only 22, so there was no public rebuke, but he could not stand to give away another title.

"He was getting double- and triple-teamed, and in order to win a championship, he needed to make better decisions with the ball," says former Thunder point guard Kevin Ollie, now the head coach at Connecticut. "He needed to find other things he could do besides force up shots. That was the incentive to change his pattern." Over several weeks Zormelo and Durant formulated a written plan focusing on ballhandling, passing and shot selection. They were transforming a sniper into a playmaker. Growing up, Durant dribbled down the street outside his grandmother's house in Capitol Heights, Md. He played point guard as a freshman at National Christian Academy in Fort Washington. He watched And1 DVDs to study the art of the crossover. "Where I'm from, you got to have the ball," Durant says. "That's how we do it. We streetball." But he sprouted five inches as a sophomore, from 6'3" to 6'8", and suddenly he was a forward. Though his stroke didn't suffer, his handle did. "I still had the moves," Durant insists, "but I dribbled way too high."

He could compensate in high school, and even during his one season at Texas, but the NBA was changing to a league where the transcendent are freed from traditional positions and boundaries. When Portland was deciding between Durant and Ohio State center Greg Oden before the 2007 draft, Texas coach Rick Barnes copped a line that Bobby Knight used when the Blazers were debating between Jordan and center Sam Bowie in 1984. "He can be the best guard or he can be the best center," Barnes told G.M.'s. "It doesn't matter. Whatever you need, he'll do." The Trail Blazers selected Oden and Durant was taken second by Seattle, where coach P.J. Carlesimo started him at shooting guard. "Kevin could be all things," Carlesimo says, but back then he was too gangly to hold his spot or protect his dribble. Brooks replaced Carlesimo shortly after the franchise relocated to Oklahoma City the following season and wisely returned him to forward.

In the summer of 2011, as the NBA and its union were trying to negotiate a new collective bargaining agreement, Durant created an endless loop of YouTube videos with his preposterous scoring binges at East Coast pickup games. What the cameras didn't show were the drills he did during daily 6 a.m. workouts at Bryant Alternative High School in Alexandria, Va., with Zormelo pushing down on his shoulders to lower his dribble. Durant even tried to rebuild his crossover, but when the ball kicked off his high tops, he hurled it away in frustration. "I'm never really going to use this!" he hollered.

But at all those pickup games, he asked to play point guard, and in downtime he watched tapes of oversized creators like Bird and Magic Johnson. "Opponents are going to do anything to get the ball out of your hands," Zormelo told him. "They're going to make you drive and pass." Durant could typically beat double teams simply by raising his arms. Even though he is listed at 6'9", he is more like 6'11", with a 7'5" wingspan and a release point over his head. The only defenders long enough to challenge his jumper aren't normally allowed outside the paint. "Most guys can't shoot over the contested hand," says Brooks. "Not only can Kevin shoot over it, he uses it as a target. If anything, it lines him up." Durant didn't distinguish between good and bad shots, because through his eyes there was no such thing as a bad one. Every look was clean. "I had to tell him, 'If you have a good shot and I have a good shot, I want you to take it,'" Brooks says. "'But if you have a good shot and I have a great shot, you have to give it to me.'"

Ballhandling drills begat passing drills. Durant saw what the Thunder could accomplish if he took two hard dribbles and found an abandoned man in the corner. With Zormelo's research as a guide, Durant identified his sweetest spots at both elbows, both corners and the top of the key. From those happy places, he is doing the Thunder a disservice if he doesn't let fly, but outside of them he prefers to probe. He moves a half step slower so he can better see the floor.

This season Durant is averaging two fewer field goals and nearly two more assists than he did in 2011, and he has practically discarded two-point shots outside 17 feet. Brooks tells him on a near nightly basis, "KD, it's time. I need you to shoot now." Says Brooks, "To extend the apple metaphor, I'm now able to put him all over and get fruit." He isolates Durant at the three-point line, posts him up and uses him as the trigger man in the pick-and-roll. When defenders creep too close, Durant freezes them with a crossover at his ankles or deploys a rip move that former Thunder forward Desmond Mason taught him four years ago to pick up fouls.

"Remember when tall guys would come into the league and people would say, 'They handle like a guard!' but they never actually did handle like a guard?" says Thunder forward Nick Collison. "Kevin really does handle like a guard." Durant has become both facilitator and finisher, shuttling between the perimeter and the paint, stretching the limits of what we believe a human being with his build can do. If his progression reminds you of someone else's, well, that's probably not an accident.

Durant was 17 when LeBron James invited him into the Cavaliers' locker room at Washington's Verizon Center after a playoff game against the Wizards. "That's my guy," Durant says. "I looked up to him, and now I battle him." In a sense, the 2011 lockout was a boon for the NBA because it allowed the premier performers to explore new boundaries. James fortified his dribble, and so did Durant. James developed his post skills, and so did Durant. James studied his shot charts, vowing to eliminate inefficiencies, and so did Durant. James already passed like Magic, but Durant started to pass like Bird. They hopped on parallel evolutionary tracks, advancing in the same manner at the same time. When a quote from James is relayed-"He's my inspiration. We're driving one another"-Durant nods in approval. It's as if the finest poets in the world are also each other's muses.
"I don't watch a lot of other basketball away from the gym," Durant says. "But I do look at LeBron's box score. I want to see how many points, rebounds and assists he had, and how he shot from the field. If he had 30 points, nine rebounds and eight assists, I can tell you exactly how he did it, what type of shots he made and who he passed to." Durant and James take flak for their friendship, but it is based on a mutual appreciation of the craft. They aren't hanging out at the club. They are feverishly one-upping each other from afar. "People see two young black basketball players at the top of their game and think we should clash," Durant says. "They want the conflict. They want the hate. They forget Bird cried for Magic. A friend was getting on me about this recently, and I said, 'Calm down. I'm not taking it easy on him. Don't you know I'm trying to destroy the guy every time I go on the court?'"
Oklahoma City beat Miami in Game 1 of last year's Finals and trailed by only two points with 10 seconds left in Game 2. Durant spun to the baseline and James appeared to hook his right arm, but no foul was called and Durant's shot bounced out. The Thunder did not win again, but Durant stood arm-in-arm with Westbrook and Harden at the end of the series, a tableau of defeat but also of a boundless future. Not one was over 23. Durant and Westbrook had already signed long-term contract extensions, and Harden was still a year from restricted free agency. But on Oct. 27, having failed to agree on an extension with Harden, Oklahoma City sent him to Houston in a trade that threatened the very culture Durant built. For a player who attended four high schools, spent one year at Texas and one in Seattle, the Thunder signified the stability he lacked. "People tell you it's a business, but it's a brotherhood here," Durant says. "We draft guys and we grow together. We build a bond. When James left, we had to turn the family switch off."
In the first meeting after the deal, Brooks told his players, "We're not taking a step back." But everywhere else they heard otherwise. "My cousin texted me, 'I'm a Heat fan now, but I still hope you make it to the Finals,'" Durant recalls. "That's my family! That's my cousin!" He shakes his head at a small but lingering act of betrayal. "A lot of friends from home were talking about other teams, and I thought they were on our side. I don't want to be angry or bitter, but it started to build up, and I took it out on my teammates." Previously, if power forward Serge Ibaka blew a box-out, Durant would tell him, "It's O.K. You're going to get it next time." But the stakes had risen. "You want to get to the Finals again, and you think everything should be perfect, and it's not," Durant says. "So I'd scream at him and pump my fist."
Durant has picked up 12 technical fouls this season, more than twice as many as his previous career high, and he was ejected for the first time, in January, after arguing with referee Danny Crawford. "I'm rubbing off on him," says Thunder center Kendrick Perkins, who keeps a standing 2 a.m. phone call with Durant every night to discuss the state of the team. "He's getting a little edge on." The techs dovetailed neatly with Nike's "KD is Not Nice" marketing campaign, but they still don't fit the recipient. Even after the ejection, Durant stopped to high-five kids sitting over the tunnel. "People get it confused and think you have to be a jerk to win," he says. "But we all feed off positive energy. I'm a nice guy. I enjoy making people happy and brightening their day. If someone asks me for an autograph on the street, I don't want to wave him off and tell him, 'Hell, no.' That's not me. The last few months I've calmed down and had more fun. We can still get on each other, but there's another way."

Without Harden, Oklahoma City needed a new playmaker, and Durant had spent more than a year preparing for the role. He just didn't realize it at the time. "They were looking for somebody else to move the defense and handle the ball in pick-and-roll," says a scout. "It turned out to be him." When Durant was 20, the Thunder asked him to act 25, and now that he is nearly 25, the plan for his prime has come to fruition. He is the NBA's best and perhaps only answer for James. "I've given up trying to figure out how to stop him," said Celtics coach Doc Rivers. "And I'm not kidding."

On Nov. 24, four weeks after Harden left, the Thunder were a respectable but unremarkable 9-4 and nursing a five-point lead with one minute left in overtime at Philadelphia. Durant posted up on the right wing, bent at the waist, a step inside the perimeter. Dorell Wright, the unfortunate 76er assigned to him, planted one hand on Durant's rib cage and another on his back. "What do I tell a guy in that position?" asks an NBA assistant coach. "I shake his hand and say, 'Good luck.'"

Durant faced up against Wright, tucked the ball by his left hip and swung his right foot behind the arc, toe-tapping the floor like a sprinter searching for the starting block. Durant had scored 35 points, but on the previous possession he fed Westbrook for a three, and on the possession before that he set up a three by Kevin Martin, who had arrived from Houston in the Harden trade. It was time for the Durant dagger, but before he shimmied his shoulders and unfurled his arms he spotted guard Thabo Sefolosha, ignored in the left corner. Sefolosha was 1 for 6, and in the previous timeout Durant had told him, "You're going to make the next shot." Durant could have easily fired over Wright and finished the Sixers, but he let his mind wander to the ultimate destination, seven months away. I'm going to need all these guys to get to the Finals, he thought.

Durant took one dribble to his left, and center Lavoy Allen rushed up to double him at the free throw line. He dribbled twice more, to the left edge of the key, and two other Sixers slid over. Surrounded by four defenders, Durant finally shoveled to Sefolosha, so open that he feared he might hesitate. He didn't. Durant jabbed him in the chest as the ball slipped through the net.

How about them apples?

Read more…

While assessment and grading are two distinct topics, they often intertwine.  Occasionally something comes along to remind us that poor grading practices can end up negating effective assessment practices.  That's why Allen Iverson is on this site - to remind us that we need to give and assess practice, but to remember that when we do so, we're just talkin' 'bout practice!  That's also why we have this video about a player who becomes the best tailback ever but can't start because his poor practices earlier in the season were counted against him.  Now the world of sports has brought us another example of how allowing practice grades to average into the overall grade can give a misleading perspective.  

 

Thanks to AFL member, Dr. Keith Perrigan, for sending us this softball story from Tri-Cities.com.  It's about Kelsey, a high school softball player, who, after a great season last year, had an almost season-long batting slump this year.  However, in the last few weeks of the season her bat came alive.  As a result, her team has a great chance to win the state championship.  (Read the full article here: http://www2.tricities.com/sports/2011/jun/10/prep-softball-nave-bearcats-ms-june-ar-1098004/)

 

At the time of the article, Kelsey's batting average was .265.  Not terrible, but not exactly the stuff of all-stars.  However, based on her ability - as demonstrated in the past - and based on her incredible run at the end of the season, she would be anybody's pick for a spot on an all-star team.  In fact, she'd be a no-brainer all-star except for one thing - her batting average.  Softball doesn't allow a batting average to start over once a player gets hot; therefore, it's not uncommon for a batting average to tell an incomplete, or even incorrect, story.  Kelsey is the hottest player in the league, but her batting average is, well, average!  Should her coach play her?  Should other teams pitch around her?  If they're smart, the answer is "yes".  If they put all their stock in an average, then the answer is a very foolish "no".
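To put rough numbers on it (these figures are hypothetical, not Kelsey's actual stats), here's a quick sketch of how a season-long average can bury a hot streak:

```python
# Hypothetical season: a long early slump followed by a late hot streak.
early_hits, early_at_bats = 10, 60   # slump: .167 over most of the season
late_hits, late_at_bats = 12, 20     # hot streak: .600 over the final weeks

season_avg = (early_hits + late_hits) / (early_at_bats + late_at_bats)
recent_avg = late_hits / late_at_bats

print(f"Season average: {season_avg:.3f}")   # 0.275 - not exactly all-star stuff
print(f"Recent average: {recent_avg:.3f}")   # 0.600 - the hottest bat in the league
```

The season average and the recent average describe the same player, yet they tell opposite stories - which is exactly why a single average is a shaky basis for a decision.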

 

So why do we educators put so much stock in averages?  We know they often don't tell accurate stories.  We know they rarely indicate the true measure of a student's learning.  We know that they also distort the impact of our teaching on students' learning.  Yet when push comes to shove we will often swear by them.  We will cling to the argument that the average produced in our grade book is the absolute truth when it comes to a student's performance.  We will be offended and become indignant when someone suggests that a student's grade should be something other than the average we derived.

 

Why is this?  Why do we cling to averages?

 

I suppose that part of the reason is that it's what has always been done.  Perhaps using a grading system that doesn't rely on averaging together a bunch of grades might seem too radical to some.  I guess there is also a certain amount of comfort and safety in relying on an average.  If a student or parent complains about a grade, the teacher can always use the grade book average as a justification. 

 

But what if Kelsey's coach decided to bench her?  What if his coaches in the past had always played the players based on batting average?  What kind of coach would he be?  Probably a fired one.  While batting averages are fun for us sports junkies, they aren't a reliable resource upon which to base all coaching decisions.  The same is true for grade book averages.  They might provide some useful data or feedback, but they are not a reliable enough resource upon which to base our grading decisions.  Teachers should feel free to act like Kelsey's coach.  Use the average as feedback, but assign a grade based on mastery - not solely on the average.  

 

Who is in charge of the team - the coach or the batting average?

 

Who is in charge of the classroom - the teacher or the grade book average?

 

Any thoughts?

 

Read more…
As I have come to comprehend better what Assessment FOR Learning truly means and how its principles can be applied, I find myself regularly thinking about how I would do things differently if I were still in the classroom. After recently observing Paola Brinkley, one of our school’s Spanish teachers, I realized yet another former practice of mine that I would now change. It’s in the area of reviewing for a test or a quiz, and I think teachers of all content areas will benefit from creating their own version of Mrs. Brinkley’s practice for their classrooms.

As a teacher, my methods of reviewing for quizzes and tests were fairly typical of many classrooms. I basically did one of two things:

1. Played a basketball review game: This game was always fun. The kids and I both enjoyed it. If a student paid careful attention to each question I asked, then he or she would have heard almost every question on the upcoming test/quiz. While it definitely was possible for all students to get a decent review from this method, in hindsight it had some drawbacks:
  a. Because I asked one student at a time a question, there was almost never 100% participation - or anything even close to that.
  b. I was not able to precisely gauge who knew what or what overall problems students were having with the material. Yes, I knew that the kid who kept wanting to answer questions knew it all, and I could safely assume that certain kids knew very little. However, I would not have been able to say with certainty the areas of strength and weakness that the class shared.
  c. The students left the room having enjoyed class, but they didn’t necessarily leave with a greater incentive to study or with a specific plan for studying.

2. Handed out a review sheet for students to complete: Some years I graded the review sheet. In hindsight I definitely would change that practice. It no longer makes sense to me to grade a review sheet. I understand the point of view that says the grade might be an incentive for doing the review, but grades should reflect mastery rather than be used as incentives (or punishments for not doing work). If a student didn’t do the review, that wouldn’t necessarily reflect on his or her level of mastery. These review sheets generally consisted of all the questions on the test. While some students definitely completed the review and thereby raised their test grade, I wonder how much of what I was doing encouraged memorizing the answers to specific questions rather than truly mastering content. Also, this method of review didn’t let me know how my students were doing in time to help them prepare for the test/quiz, since I collected the review on the day of the test/quiz. Finally, I wonder how many students viewed this as a study guide vs. just another assignment that had to be done. How many students simply copied answers from a book or notes rather than really tried to study? Or worse, how many students copied a friend’s review sheet?

While the review game and the review sheet are practices with instructional value, I believe that their effectiveness pales in comparison to what I saw Paola Brinkley do in her Spanish 2 classroom recently. Mrs. Brinkley had a quiz coming up the next day. Her objective was to review the conjugation of certain types of verbs. Each student numbered a sheet of paper 1-25. Each student also had a small whiteboard (approx. 8” x 6”) and a dry erase marker. Twenty-five verbs were shown one at a time on the overhead. The students would write their conjugation on their whiteboards and hold them up so that Mrs. Brinkley could see them. As she looked around the room she would nod to them as she saw their correct answers. Then she would go over each answer, basing her explanation on the answers she had seen written on the whiteboards. Students would then write on their numbered paper the verb, whether or not they got it right, and any other information about its conjugation that they needed to remember. At the end of the class period, after having gone through all 25 verbs, Mrs. Brinkley reminded the students that their numbered sheet of paper was now their own personalized study guide for the next day’s quiz.

I’m sure you can see the simplicity in this activity, and, hopefully, you can think of some ways to replicate it in your own classroom with your own content. As you do, I think it’s important that you remember the key AFL factors present in this review:

1. 100% Engagement - The students really appeared to enjoy writing on the whiteboards. This activity lends itself to a high level of engagement, which means the teacher will get maximum feedback, as opposed to the one-at-a-time feedback I received during my basketball review or the not-at-all feedback I received from my review sheets.

2. Feedback for the Teacher - AFL is a process by which a teacher gains feedback that impacts his or her instruction. By seeing all of the answers at one time from each student, Paola was able to shape her review based on their needs. For example, several times throughout the class period she reminded the students that they would lose points the next day if they did not use accent marks. She knew to remind them of this because they were not using accents on their whiteboards. She also stopped several times and went into greater depth explaining the verbs with which the students seemed to have the greatest difficulty.

3. Feedback for the Students - I think the most powerful aspect of AFL is when students themselves are given feedback that they can use to guide their own personal learning. Sometimes students are intimidated by the idea of studying because in their minds it means going back over every single thing they’ve learned. This seems like too large a task to complete, so many don’t even try to start it. It also wouldn’t be a very efficient way to study. After all, why spend time studying something you have truly mastered? Each student left the class that day with a personalized study guide - something of which Mrs. Brinkley wisely reminded them. Whether or not a student chooses to use the study guide is one thing, but each student received the feedback he or she needed to know exactly how to focus his or her studying. Surely this will increase the odds that students will study, and most importantly, it should guide learning.

Mrs. Brinkley's students (as the 6th of the 6 Key AFL Ideas states) knew what they needed to know so they could know if they knew it. This simple and easy-to-apply activity captured the essence of AFL - teachers and students basing teaching and learning on feedback that they are receiving from assessments. I wish I could go back and use a version of it in my World History classes. I would encourage you to consider how you might apply it to your content area. Any thoughts?
Read more…
When faced with a new concept it is natural and necessary to attach meaning to that concept. Sometimes when we find an understandable example of that concept, we begin to mistake that example for the concept itself. As Salem High School and the City of Salem Schools strive to master the concepts of Assessment FOR Learning, it is understandable that this will happen to some degree.

For example, earlier in the year we at SHS discussed a strategy of having a final test grade or portions of a final test grade replace the quiz grades that led up to that test. (Read about that here.) This method made the quizzes into practice assignments that prepared the student for the test. I began to receive some feedback from people saying that AFL wouldn't apply to their classes because this strategy for whatever reason did not fit into their classroom or teaching style. While this was a good example of AFL, it was just an example. AFL is bigger than any one practice, which led to this post on that topic.

Similar questions have arisen over time in regard to various other procedures that have been held up as examples of AFL. My post on philosophy v. procedures attempted to deal with the fact that AFL is much bigger than any one procedure.

Recently I have received feedback that shows that the practice of allowing students to retake tests and quizzes is being seen as the crux of AFL. While I have heard from many teachers who have used retakes as a way to allow students to learn from feedback, as was the case with tests replacing quizzes, AFL is bigger than retakes.

To help illustrate this I thought it might be useful to describe how AFL might have impacted my own classroom - if I hadn't left the classroom 6 years ago for the dark side of the force (administration)! :)

In my 9th Grade World History classroom my assessments and my grading were very closely related. While many of my graded assessments were AFL-ish (although I had never heard of AFL back then) I realize that I did not do enough assessing solely for the purpose of learning rather than grading. Here's how I assessed/graded:

1. Almost Daily Homework Assignments - 10 pts/assignment
Each assignment directly prepared students for the quiz the next day.

2. Almost Daily Quizzes - 30 pts/quiz
Often the same quiz was given several days in a row so that students could master the content.

3. Almost Weekly Tests - Range of 100 pts/test to 500 pts/test
Tests would build on themselves. A 100 pt test might cover Topic A. A 200 pt test might cover Topics A and B. A 300 pt test might cover Topics A, B, and C, and so on... By the time we got to the larger tests the students tended to have mastered the content because they had been quizzed and tested on it over and over - not to mention what we had done in class with notes, activities, videos, debates, etc.

So what would I do differently now that I have spent so much time grappling with AFL? Here are the changes:

1. Change in point values:
  • Homework would still be given but would either not count for points or all homework assignments would add up to one homework grade of approximately 30 points. Another idea I have contemplated would be that at the end of the grading period students with all homework completed would get a reward, perhaps a pizza party, while students with missing assignments would spend that time completing their work.
  • Quizzes would still be given almost daily but would now only count 10 or 15 points each. In addition, if a student's test grade was higher than the quizzes that led up to it I would excuse the quiz grades for that student.
  • Tests would count more. In the class I taught the tests were used as the ultimate gauge of mastery learning. The tests would continue to build on themselves but would probably start somewhere around 300 points and build up to around 800 points.
2. Change to How Quizzes are Viewed:
  • To build on the point I made above, the quizzes would be excused if the student's test grade was higher. The quizzes would be considered practice grades. Students would be trained not to fret about quizzes but to instead use them as ways to gauge their learning. I might even borrow Beth Moody's GPS idea and allow students to retake an occasional quiz; however, this would probably not be the case for most quizzes since whenever possible I would be repeating quizzes anyway.
  • The goal of quizzes would be to practice for the test. In the past I viewed the quizzes more as grades unto themselves. The problem with this, though, was that if I had four 30 pt quizzes before a 100 pt test, then the quizzes added up to more than the test. Adding in the four or five 10 point homework assignments further got in the way. Yes, they were assessments that helped the students learn, but they also had an inappropriate impact on the grade. They could help the student master the content as evidenced by the high test score while simultaneously lowering the student's grade.
3. Students Assessing Their Own Progress:
  • If I were in the classroom today I would add an entire new element of students assessing themselves. I would want students to take control of their own learning and to know what they do and don't know. I would then want them to use that knowledge to guide their own studies.
  • One thing I would do would be to make sure that every day (if possible) the students and I would both receive feedback. As I prepared my lessons I would ask myself the questions posed in this earlier post.
  • When I reviewed with students for tests I would change my method and adopt a strategy similar to this one used by Paola Brinkley and many other teachers in our building. (I would probably find a way to turn it into a game since I love playing games in class.)
  • At the beginning of each unit/topic I would give students a rubric like the one in this post. At some point during most class periods I would have the students use the rubric to assess themselves and see how well they are mastering content. They would then use the rubric as a study guide as described in the post.
  • I would also have students analyze their grades regularly so that they would know how well they needed to do on a test to reach their grade goal. (Implied in this is the fact that I first would have students regularly set goals.) I would use a strategy similar to this one used by Lewis Armistead.
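To see why the point values matter, here is a quick sketch (the scores are hypothetical) comparing the old scheme, where practice grades average in alongside the test, with the new one, where quizzes are excused when the test grade is higher:

```python
# Hypothetical student: struggles on the practice quizzes, masters the material by test time.
quiz_scores = [18, 16, 20, 18]        # four quizzes at 30 points each (60% on practice)
quiz_possible = [30, 30, 30, 30]
test_score, test_possible = 95, 100   # 95% on the test the quizzes led up to

# Old scheme: quizzes average into the grade alongside the test.
old_grade = (sum(quiz_scores) + test_score) / (sum(quiz_possible) + test_possible)

# New scheme: the quiz grades are excused because the test grade is higher.
new_grade = test_score / test_possible

print(f"Old grade: {old_grade:.1%}")  # 75.9% - practice drags down demonstrated mastery
print(f"New grade: {new_grade:.1%}")  # 95.0% - the grade reflects the final evidence
```

The same student, the same evidence of mastery - but under the old point values the four 30-point quizzes outweigh the 100-point test and pull a 95% performance down to a C.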

Notice that my new plan for my classroom doesn't look incredibly different from my old one. I am assessing daily - which I was already doing - but I have changed my view on grading - it's no longer primary as it once was. Assessing is now different from and more important than grading. I have added more opportunities for students to assess themselves.

Notice that retaking tests was not a part of my AFL plan. Students are already taking multiple tests on the same content. Those tests are building in point value so that if you master it by the end that is outweighing your performance at the beginning. You are also being quizzed regularly and regularly assessing yourself. There really isn't a need for retaking the test. (Please realize that this does not mean that retaking tests should be frowned upon. It simply isn't the only way to use AFL.)
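The arithmetic behind that claim can be sketched like this (the scores are hypothetical): because the later, more comprehensive tests carry more points, end-of-unit mastery automatically outweighs early struggles, with no retake needed.

```python
# Hypothetical student improving on the same content across cumulative tests.
test_points = [100, 200, 300]        # tests build in value as topics accumulate
percent_scores = [0.50, 0.70, 0.90]  # early struggle, eventual mastery

earned = sum(p * s for p, s in zip(test_points, percent_scores))
possible = sum(test_points)

weighted_grade = earned / possible
simple_average = sum(percent_scores) / len(percent_scores)

print(f"Point-weighted grade: {weighted_grade:.1%}")  # 76.7% - later mastery counts most
print(f"Simple average:       {simple_average:.1%}")  # 70.0% - every test counts equally
```

The escalating point values act as a built-in weighting toward the most recent (and most complete) evidence of learning.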

So does this mean that the plan I have outlined is how AFL should be done? NO NO NO NO NO! It's how AFL could be done. It is guided by AFL philosophies and ideas, but those same ideas could lead to very different procedures in other classrooms and with other content. AFL is big enough to go beyond certain practices and instead guide all good instructional practices.

Any thoughts?
Read more…

Educators steal from one another all the time.  It's how we get better.  

 

The following post is stolen from Matt Townsley (@mctownsley) and his MeTA Musings Blog - a great resource for AFL and Standards Based Grading ideas.  You can read the article in its entirety at: http://mctownsley.blogspot.com/2013/04/sbg-is-more-than-teach-test-reassess.html

 

In working with educators around the country I have found that most seem to cognitively grasp the concepts of Assessment FOR Learning.  However, many have difficulty putting those concepts into practice.  I think this often happens as a result of not planning with AFL in mind from the get-go.  Sometimes educators will teach as they have always taught and then try to attach AFL principles after the fact.  To be most effective, AFL should be embedded into the lesson from the beginning.  For some folks that might mean starting over from scratch.  For others that might mean a few tweaks.  But it must be intentional.

 

The following post that I copied from MeTA Musings gives an example of how not to practice AFL.  Mr. Jones tries to apply AFL after the fact in his math classroom.  It then gives an example of how a lesson or unit might look if AFL is there from the start.  This image (https://salemafl.ning.com/photo/sbgflowchart) goes along with the example below.

 

I'd love to hear any feedback you might have.

 


 

Here's an example of how SBG should not work in a middle school math class:

Mr. Jones teaches the area of a triangle on Monday and assigns some practice problems to complete in and outside of class.  Some of the students complete all of the practice problems.  Some of them do not. All students are provided the answers ahead of time on the board.  Mr. Jones teaches the area of a circle on Tuesday and assigns some practice problems to complete in and outside of class.  Again, students are provided the answers to the practice problems ahead of time.  Some of the students complete the practice problems and some do not.  On Wednesday, Mr. Jones gives all students a quiz on these two standards.  After Mr. Jones looks at the quizzes, he sees that about half of the class still doesn't understand how to find the area of a triangle or the area of a circle.  He thinks to himself, "Well, I'm really glad we have standards-based grading, because these students can reassess."  The next day, he hands back the quiz and tells students what they need to do before they can participate in a reassessment.  When only a few students show up for a reassessment opportunity during the next week, Mr. Jones becomes flustered and wonders why students aren't taking advantage of reassessments.

When I look at the visual above and think about Mr. Jones' SBG practices, I believe he's missing the "classroom feedback and informal assessment" part of the flowchart.  Mr. Jones appears to think standards-based grading is merely teaching, testing, and offering reassessment opportunities.

Here's an example of what SBG might look like in a middle school math class:

Mr. Johnson teaches the area of a triangle on Monday.  Before he assigns some practice problems, he asks each student to complete a problem on their small whiteboard and hold it up in the air.  Mr. Johnson can quickly see which students are still struggling to understand the concept.  Rather than assigning everyone the same practice problems to complete in and outside of class, Mr. Johnson makes a quick adjustment and groups together several students who appear to still be struggling.  They will be working with Mr. Johnson for some of the remaining class time and will also be completing different practice problems than their classmates.  The next day, Mr. Johnson asks each student to view a solution to a completed practice problem that is already written on the board.  Each student must write a brief paragraph explaining whether the solution is correct, with evidence to support their reasoning.  Mr. Johnson walks around the room while students are writing their paragraphs.  Next, Mr. Johnson asks students to pair up and share their paragraphs with each other.  Finally, he asks several students to share their written responses aloud, and the class collectively decides what the correct solution to the problem is.
Mr. Johnson teaches the area of a circle to round out the class period on Tuesday.  Rather than assigning practice problems from the text, he asks each student to find the area of a circle found in their home.  Each student will be asked to share their findings tomorrow in class.  On Wednesday, Mr. Johnson decides to administer a quiz that he knows will never land in the grade book.  He uses the quiz as an opportunity to provide written feedback to every student, but only after each student has once again self-assessed in pencil against the standards.  Mr. Johnson writes comments by many of the students' solutions and then circles where each student is on a continuum of understanding for each standard.
Mr. Johnson asks students with complementary strengths and weaknesses to pair up for seven minutes during class on Thursday.  Josie understood the area of a triangle at a high level, but stunk it up on the area of a circle.  She'll be conferencing with Alex, who didn't have a clue about the area of a triangle but dominated the area of a circle.
Later in the week, all students complete another assessment, but this time it goes into the grade book.  Mr. Johnson feels pretty good about the assessment results because he had the opportunity to see and hear students' thinking during class and was able to provide them with structured feedback through the ungraded quiz before this most recent assessment.  Students are offered reassessment opportunities after this graded assessment as well.

This fable is far from an ideal classroom; however, I think it illustrates an aspect of standards-based grading that deserves more attention in my own conversations with fellow educators: less grading and more feedback.

Read more…

Downloading Videos from YouTube

This doesn't directly relate to Assessment FOR Learning, but it will be helpful to teachers trying to bring different resources into the classroom. YouTube is full of wonderful video clips to use in a classroom. There are two problems with YouTube, though:

1. Some schools block it.
2. Sometimes a slow network will make it difficult or impossible to watch a video.

The solution is to download the videos to your computer (this can be done at home if your school blocks YouTube) and then either show them directly from your computer and/or embed them into PowerPoints. Here is the easiest way I know to do this:

1. Go to http://youtubedownload.altervista.org
2. Follow the instructions to download and install the YouTube Downloader software.
3. Open YouTube Downloader once it is installed. Check the radio button next to "Download video from YouTube". You will see a bar for "Enter Video URL".
4. Go to YouTube and find a video you want. Copy the URL and paste it into the "Enter Video URL" bar in the YouTube Downloader window. (On YouTube, you can find the URL either in the URL bar at the top of your browser or in the upper right-hand corner of the website, just below the information about the video.)
5. Click OK; the video will download and save to the place you designate.
6. It will be in a format that won't embed into a PowerPoint. In YouTube Downloader, click the radio button next to "Convert video (previously downloaded) from file".
7. Now you will see a bar labeled "Select video file". Click the box to the right of the bar and choose your file from where it is saved on the desktop.
8. In the "Convert to" pull-down menu, choose the type of file you want. Windows Media Video (V.7 WMV) works best for PowerPoints.
9. Click OK.

This is a very simple process. In less than 10 minutes I downloaded, converted, and emailed 3 videos for a teacher in our school. Let me know if you have any questions about it.
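For teachers comfortable with a command line, the same download-and-convert workflow can be sketched with the yt-dlp and ffmpeg tools instead of the GUI program above. This is my own suggested alternative, not part of the steps described in this post, and the video URL below is just a placeholder. The sketch is a dry run: it only prints the two commands it would use, so nothing is downloaded.

```shell
# Dry-run sketch of the workflow using yt-dlp and ffmpeg
# (assumed to be installed; VIDEO_ID is a placeholder, not a real clip).
URL="https://www.youtube.com/watch?v=VIDEO_ID"

# Step 1: download the clip as an MP4 file.
download_cmd="yt-dlp -f mp4 -o lesson.mp4 $URL"

# Step 2: convert it to WMV so it embeds in older PowerPoint versions.
convert_cmd="ffmpeg -i lesson.mp4 lesson.wmv"

# Print the commands instead of running them.
echo "$download_cmd"
echo "$convert_cmd"
```

To actually run the workflow, you would replace the placeholder URL with a real video address and execute the two printed commands directly.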
Read more…

Blog Topics by Tags

Monthly Archives