Standards-Based Grading Research Paper

Matt Townsley

Here in Iowa, competency-based education is gaining traction at the state and grassroots level. In fact, the Iowa Department of Education has launched a multi-year CBE collaborative. Needless to say, it’s an exciting time to be an educator in the Hawkeye State!

Meanwhile, a core group of Iowa schools has started to implement a standards-based grading philosophy in middle and high schools. Because of these two movements in our state, standards-based grading and competency-based education are often incorrectly presented as synonymous practices. As a member of Iowa’s CBE task force and through my work as a district administrator in a system that has embraced standards-based grading K-12, I’ve been in a position to think about and discuss these two topics extensively. When area schools hear about our grading and reporting practices, we are often asked how our system relates to those working towards competency-based educational models. While many of the ideas overlap, I felt compelled to tease out these two education terms in order to honor their similarities and differences.

What is standards-based grading? 

Standards-based grading “involves measuring students’ proficiency on well-defined course objectives” (Tomlinson & McTighe, 2006). (Note: Standards-based reporting involves reporting these course objectives rather than letter grades at the end of each grading/reporting period.)

The visual below compares traditional grading with standards-based grading practices.

Traditional Grading System

  1. Based on assessment methods (quizzes, tests, homework, projects, etc.). One grade/entry is given per assessment.
  2. Assessments are based on a percentage system. Criteria for success may be unclear.
  3. Use an uncertain mix of assessment, achievement, effort, and behavior to determine the final grade. May use late penalties and extra credit.
  4. Everything goes in the grade book – regardless of purpose.
  5. Include every score, regardless of when it was collected. Assessments record the average – not the best – work.

Standards-Based Grading System

  1. Based on learning goals and performance standards. One grade/entry is given per learning goal.
  2. Standards are criterion or proficiency-based. Criteria and targets are made available to students ahead of time.
  3. Measures achievement only OR separates achievement from effort/behavior. No penalties or extra credit given.
  4. Selected assessments (tests, quizzes, projects, etc.) are used for grading purposes.
  5. Emphasize the most recent evidence of learning when grading.

Adapted from O’Connor, K. (2002). How to Grade for Learning: Linking Grades to Standards (2nd ed.). Thousand Oaks, CA: Corwin Press.

In our district, secondary teachers are required to abide by the following grading guidelines:

  1. Entries in the grade book that count towards the final grade will be limited to course or grade level standards.**
  2. Extra credit will not be given at any time.
  3. Students will be allowed multiple opportunities to demonstrate their understanding of classroom standards in various ways. Retakes and revisions will be allowed.
  4. Teachers will determine grade book entries by considering multiple points of data, emphasizing the most recent data, and will provide evidence to support their determination (a rough sketch of this approach appears after these guidelines).
  5. Students will be provided multiple opportunities to practice standards independently through homework or other class work. Practice assignments and activities will be consistent with classroom standards for the purpose of providing feedback. Practice assignments, including homework, will not be included as part of the final grade.

** Exceptions will be made for midterm and/or final summative assessments. These assessments, limited to no more than one per nine-week period, may be reported as a whole in the grade book.
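
To make guidelines 1 and 4 concrete, here is a minimal sketch of a grade book organized around standards rather than assessments: each standard accumulates multiple pieces of evidence over time, and the reported entry emphasizes the most recent evidence. The standard name, scores, and the "average the latest two pieces of evidence" rule are hypothetical illustrations, not our district's prescribed calculation; in practice, teacher judgment determines the entry.

```python
from collections import defaultdict

# Hypothetical grade book keyed by standard rather than by assessment.
# Each standard accumulates (date, score) evidence over time; the names
# and numbers below are invented for illustration.
gradebook = defaultdict(list)

def record_evidence(standard, date, score):
    """Store one piece of evidence (quiz, retake, project, etc.) for a standard."""
    gradebook[standard].append((date, score))

def current_entry(standard):
    """Report an entry that emphasizes the most recent evidence.
    Averaging the latest two scores is a stand-in for teacher judgment,
    not a prescribed district rule."""
    evidence = sorted(gradebook[standard])            # sort by date (ISO strings)
    recent = [score for _, score in evidence[-2:]]    # most recent evidence
    return sum(recent) / len(recent)

record_evidence("Solve quadratic equations", "2013-09-10", 2.0)
record_evidence("Solve quadratic equations", "2013-09-24", 3.0)
record_evidence("Solve quadratic equations", "2013-10-02", 3.5)   # retake
print(current_entry("Solve quadratic equations"))  # 3.25
```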

A short five-minute video explaining standards-based grading was used to convey these ideas prior to our implementation, which dates back to the 2012-13 school year, and parents and teachers have commented positively after watching it.

What is competency-based education? 

Under a competency-based education system, “learners advance through content or earn credit based on demonstration of proficiency of competencies” rather than seat time. (Source: Iowa Department of Education CBE Pathways.)

With so many definitions of CBE available, I settled on the principles of competency-based education from Iowa’s Guidelines for PK-12 Competency-Based Pathways as a reputable framework, because they were adapted from the International Association for K-12 Online Learning (iNACOL). I’ve included the CBE Pathways principles and their descriptors below.

A. Students Advance upon Mastery 

  • Students advance to higher-level work upon demonstration of mastery of standards rather than according to age or seat time.
  • Students are evaluated on performance and application.
  • Students will master standards and earn credit or advance in content at their own pace.
  • They will work through some standards more rapidly while taking more time to ensure mastery on others.

B. Explicit and Measurable Learning Objectives that Empower Students

  • The relationship between student and teacher is fundamentally changed as students gain understanding of what working with standards requires and take ownership of learning, and as teachers provide the appropriate supports for learning.
  • The unit of learning becomes modular.
  • Learning expands beyond the classroom.

C. Assessment Is Meaningful and a Positive Learning Experience for Students

  • Schools embrace a strong emphasis on formative assessment as the unit of learning becomes modular.
  • Teachers collaborate to develop understanding of what is an adequate demonstration of proficiency.
  • Teachers assess skills or concepts in multiple contexts and multiple ways.
  • Attention is on student learning, not student grades.
  • Summative assessments are adaptive and timely.
  • Assessment rubrics are explicit in what students must be able to know and do to progress to the next level of study.
  • Examples of student work that demonstrate skills development throughout a learning continuum help students understand their own progress.

D. Rapid, Differentiated Support for Students Who Fall Behind or Become Disengaged 

  • Educator capacity, and students’ own capacity to seek out help, will be enhanced by technology-enabled solutions that incorporate predictive analytic tools.
  • Pacing matters. Although students will progress at their own speeds, students who are proceeding more slowly will need more help, and educators must provide high-quality interventions.

E.  Learning Outcomes Emphasize Application and Creation of Knowledge

  • Competencies will include the standards, concepts, and skills of the Iowa Core as well as the universal constructs (creativity, complex communication, collaboration, critical thinking, flexibility and adaptability, and productivity and accountability).
  • Lifelong learning skills are designed around students’ needs, life experiences, and the skills needed for them to be ready for college, career, and citizenship.
  • Expanded learning opportunities are created so that students can develop and apply skills as they are earning credit.

What are some ways in which standards-based grading and competency-based education are similar?

In both systems…

  • Students learn specific standards or competencies based on a pre-determined rubric.
  • Students take more ownership of their learning, because what they are learning is communicated rather than assignment labels such as “Project 3” or “Worksheet 4-2.”
  • Using assessments in formative ways is the norm rather than exception.

Beyond these shared features, the two systems may be similar in some contexts. For example, learning outcomes could emphasize application and creation of knowledge in a classroom that uses a standards-based grading philosophy, but nothing in the definition of standards-based grading requires it. Similarly, experiences could be designed around students’ needs and life experiences in a standards-based grading classroom, but it is not necessarily the norm.

What are some ways in which standards-based grading and competency-based education are different?

In a competency-based system…

  • Students advance to higher-level work and can earn credit at their own pace. (In a building, district, or classroom using a standards-based grading philosophy, this is not necessarily the case. Students are likely still required to complete a set number of hours of seat time in order to earn credit for the course.)
  • Learning expands beyond the classroom. This may or may not take place under a standards-based grading philosophy. For example, in a competency-based system, a student who learns a lot about woodworking over the summer may earn credit when he or she returns to school the next year. Similarly, students are encouraged to learn outside the classroom so that they can demonstrate competencies at their own pace, however rapid that may be.
  • Teachers assess skills or concepts in multiple contexts and multiple ways. (This may or may not be the case in a standards-based grading classroom; however, it is non-negotiable in competency-based education.)

Summary

A standards-based grading (SBG) philosophy is similar to, but not synonymous with, the idea of competency-based education (CBE). SBG is a way of thinking about grading and assessment that more clearly communicates to parents and students how well learners currently understand the course objectives/standards/competencies. CBE is a system in which students move from one level of learning to the next based on their understanding of pre-determined competencies, without regard to seat time, days, or hours. A competency-based system may utilize a standards-based report card to communicate student learning; however, the two educational terms are not, by definition, the same.

About the Author

Matt Townsley joined the Solon Community School District (Solon, IA) central office administrative team in 2010. Prior to his current role, he taught high school math for six years in the same district. Currently, Matt is pursuing a doctorate in school improvement at the University of West Georgia. One of his articles, "Redesigning Grading—Districtwide" was published in the December 2013 issue of Educational Leadership. He regularly presents at conferences and leads professional development on the topics of formative assessment and standards-based grading. He can be reached at @mctownsley on Twitter.

Tips From Dr. Marzano

Formative Assessment & Standards-Based Grading

In lieu of formative assessments and summative assessments, the terms formative scores and summative scores can be used to describe how teachers employ assessments in the classroom.

Assessments have many forms and many uses, two of which are to provide formative and summative scores to students. The word score, rather than assessment, in the terms formative score and summative score signals that the essential difference between the two is not their format, but their purpose in the assessment process. Teachers record and track formative scores from individual assessments as indicators of students’ knowledge or skill at particular moments in time. In comparison, summative scores are final scores based on the pattern of students’ responses over time. Teachers may base each score on a number of common assessment forms, such as obtrusive, unobtrusive, and student-generated assessments. However, formative scores are used for tracking progress, while summative scores express students’ mastery of a topic, generally at the end of a unit (pp. 27–28).

Formative scores should never be averaged to arrive at a student’s summative score.

Averaging may seem like a logical way to determine a student’s cumulative score at the end of a unit; however, this method is antithetical to the key principles of formative assessment. When a teacher tracks a student’s formative scores for one unit, the student’s scores will generally show a progression of learning. This means that a student’s scores will likely be lower at the beginning of a unit than at the end. Therefore, if a teacher averages a student’s formative scores to calculate a summative score, the resulting summative score would be lower than the student’s actual current level of skill, as it would give early scores the same weight as later scores. To avoid inaccurate summative scores, teachers can give more weight to scores at the end of the unit, which generally best reflect students’ level of mastery (p. 28).
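
As a hypothetical worked example of this point, the sketch below compares a straight average of one student's formative scores with a recency-weighted score. The scores and weights are invented for illustration; the weighting simply stands in for a teacher's judgment that later evidence better reflects end-of-unit status.

```python
# Hypothetical formative scores (0-4 scale) for one student across a unit,
# listed oldest to newest; the upward trend reflects learning over time.
scores = [1.5, 2.0, 2.5, 3.0, 3.5]

# A straight average treats early and late evidence equally.
simple_average = sum(scores) / len(scores)                     # 2.5

# Recency weighting counts later evidence more heavily. These weights
# are illustrative only, not a prescribed formula.
weights = [1, 1, 2, 3, 5]
weighted = sum(w * s for w, s in zip(weights, scores)) / sum(weights)

print(simple_average)        # 2.5  -> understates current skill
print(round(weighted, 2))    # 2.92 -> closer to the most recent evidence
```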

A summative score is based on formative scores collected throughout a unit rather than a single final assessment.

While final cumulative assessments can be useful in gathering data about students’ current knowledge and skill in a topic area, every assessment contains error, which necessarily limits the definitiveness of any one assessment. It is essential that students’ summative scores are based on multiple sources of data to lessen the inherent error in all test forms. In addition, a student’s last formative assessment score is not an appropriate summative score, as it may not necessarily reflect the student’s current level of knowledge and skill. Teachers can evaluate individual students’ or the class’s learning progressions in tandem with any final assessment scores to determine the most representative summative scores (p. 29; see pp. 81–98 for a more detailed discussion).

Short oral responses are a great opportunity to provide instructional feedback.

Short oral responses are a great informal way to ensure that students grasp classroom content. Teachers pose questions and call on students to answer them, creating a low-stakes assessment opportunity and allowing teachers to correct any errors in understanding (that is, give instructional feedback). When students respond, it is important to ask students why they think their answer is correct, rather than simply judging the answer to be right or wrong and moving on. These opportunities for discussion and explanation give both teachers and students the chance to see what is clear and not clear about content in a low-stress environment, and teachers gain the opportunity to clarify any issues before moving on to more advanced content (p. 70).

Formal oral reports can be used in tandem with proficiency scales to serve as obtrusive formative assessments.

Oral reports, a classic formative assessment, can be likened to written essays that students must develop multiple drafts of before arriving at a final product, though with the added step of delivering their final product orally. In the same way that written essays can be scored using a proficiency scale, formal oral reports can also be scored using a proficiency scale. To do this, teachers should clearly specify the content that students should address in their presentations and understand the proficiency scale that will be used to score the oral report. The proficiency scale should identify content at the basic (2.0), proficient (3.0), and advanced (4.0) levels (pp. 70–71).

Teachers using probing discussions should tailor their follow-up questions to proficiency scales in order to ask the most useful questions for assessing a student's understanding.

In a probing discussion, a teacher “meets one-on-one with a particular student and asks him or her to explain or demonstrate something.” In these situations, after or as the student gives his or her response, the teacher asks questions about that student’s responses. These follow-up questions are designed to give the teacher a clear idea of what a student does or does not know. In designing questions for probing discussions, teachers should use a proficiency scale for guidance, creating questions that align to the 2.0, 3.0, and 4.0 levels of the scale. As a student answers each question, the teacher evaluates the response as correct, incorrect, or partially correct and uses the student’s pattern of responses to assign them a score. Once finished with the probing discussion, a teacher can use the results as a formal assessment by writing down the score in a grade book or as an opportunity for instructional feedback by correcting any misconceptions a student may have about material (p. 71).
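
The sketch below shows one way a teacher might translate a student's pattern of responses into a scale score during a probing discussion. The scoring heuristic (take the highest level answered correctly and add half a point if the next level up was partially correct) is an assumption for illustration only, not Marzano's exact procedure.

```python
# Responses from one probing discussion, keyed by proficiency-scale level
# and recorded as "correct", "partial", or "incorrect". The scoring
# heuristic below is an illustration, not a prescribed procedure.
responses = {2.0: "correct", 3.0: "correct", 4.0: "partial"}

def assign_score(responses):
    """Score = highest level answered correctly, plus 0.5 if the next
    level up was partially correct."""
    score = 0.0
    for level in sorted(responses):
        if responses[level] == "correct":
            score = level
        elif responses[level] == "partial" and score == level - 1.0:
            score = level - 0.5
            break
        else:
            break
    return score

print(assign_score(responses))  # 3.5
```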

Most assessments in today’s classrooms are based on a 100-point scale. The improper use of this scale can lead to incorrect student achievement scores.

The range of scores between classrooms is one source of error associated with the 100-point scale. This type of scale provides little to no reflection of the difficulty level of each assessment. Weighting items differently from assessment to assessment, combined with uneven levels of difficulty, is akin to changing the scale from one assessment to the next. Tracking student achievement over time on a 100-point scale can therefore be tremendously difficult because of this wide variation in how scores are produced (p. 41).

A well-written scale can be thought of as an applied version of a learning progression.

A scale should make it easy for teachers to design and score assessments. To be most useful, scales should be written in student-friendly language. Teachers should introduce each scale to students and explain the content associated with each score value. Below is an example of a generic scale (pp. 44–45).

Table 3.5 Generic Form of the Scale

  Score 4.0: More complex content
  Score 3.0: Target learning goal
  Score 2.0: Simpler content
  Score 1.0: With help, partial success at a score of 2.0 content or higher
  Score 0.0: Even with help, no success

Well-constructed scales are critical to scoring demonstrative and unobtrusive observations.

Unobtrusive assessments are most easily applied to demonstrations, since demonstrating a skill usually involves doing something observable. Mental procedures are more difficult to observe; typically, a teacher would need to ask probing questions of the student to prompt a discussion. That discussion is key to assessing the student’s level of achievement (pp. 74–75).

Three types of assessments can and should be used in a classroom for a comprehensive system of formative assessment: obtrusive assessments, unobtrusive assessments, and student-generated assessments.

Student-generated assessments are probably the most underutilized form of classroom assessment. As the name implies, a defining feature of student-generated assessments is that students generate ideas about the manner in which they will demonstrate their current status on a given topic. To do so, they might use any of the types of obtrusive assessments discussed in the preceding text (pp. 23–24).

For example, one student might say that she will provide oral answers to any of the 20 questions in the back of chapter 3 of the science textbook to demonstrate her knowledge of the topic of habitats. Another student might propose that he design and explain a model of the cell membrane to demonstrate his knowledge of the topic (p. 25).

When tracking student progress using formative assessment, a 0 should not be used for a missing or incomplete assignment.

A score of 0 is never recorded in the grade book if a student has missed an assessment or has not completed an assignment. Many assessment researchers and theorists have addressed this issue in some depth (Reeves, 2004; Guskey & Bailey, 2001). Briefly, no score should be entered into a grade book that is not an estimate of a student's knowledge status for a particular topic at a particular point in time (p. 85).
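
A quick hypothetical calculation illustrates the distortion: entering a 0 for a missed assessment drags even a simple average well below what the existing evidence indicates, whereas basing the estimate only on actual evidence (and following up on the missing work separately) does not. The numbers below are invented for illustration.

```python
# Hypothetical evidence on a 0-4 scale; None marks a missed assessment.
evidence = [3.0, 3.5, None]

# Entering a 0 for the missing assessment misrepresents what the student knows.
with_zero = [s if s is not None else 0.0 for s in evidence]
print(round(sum(with_zero) / len(with_zero), 2))   # 2.17 -> distorted estimate

# Scoring only the actual evidence keeps the estimate tied to demonstrated
# knowledge; the missing work is handled separately.
actual = [s for s in evidence if s is not None]
print(round(sum(actual) / len(actual), 2))         # 3.25
```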

Student-friendly scales should have examples of what it would look like to provide a correct answer for the score of 2.0, 3.0, and 4.0 content.

Scales that have been rewritten in student-friendly language should provide students with clear guidance as to what it would look like to demonstrate score 2.0, 3.0, and 4.0 competence (see Table 3.7 for an example of a student-friendly scale). It is much more likely that students have really considered and come to understand the goals when teachers give the class the opportunity to rewrite the scale(s) in their own words (pp. 46, 141).

One fact that must be kept in mind in any discussion of assessment—formative or otherwise—is that all assessments are imprecise to one degree or another.

This imprecision is made explicit in a fundamental equation of classical test theory, which can be represented as follows:

Observed score = true score + error score (p. 13)
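
For readers who prefer formal notation, the same relationship is commonly written as below. The symbols follow standard classical test theory convention (assumed here, not taken from the page cited above), and the zero-mean error assumption is the usual textbook one that motivates basing summative scores on multiple pieces of evidence.

```latex
% Classical test theory decomposition in common textbook notation:
% X = observed score on a single assessment, T = true score, E = error.
\[
  X = T + E
\]
% Under the standard assumption that E averages to zero across repeated
% assessments, a summative judgment built on several pieces of evidence
% is a better estimate of T than any single observed score.
```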
