I have often found that, in my music-theory and aural-skills classes, students begin and end a course in roughly the same place. For instance, a student who receives a "C" on his or her first exam often receives that same grade for the course. This phenomenon has perplexed me for some time: why would a student who had trouble hearing intervals at the start of the semester experience the same degree of difficulty hearing chord progressions by the end? And would that student be any better at hearing intervals if tested on them at the course's conclusion? The greatest predictor of success is often the amount and quality of prior experience students have upon beginning their music-theoretical study, not the amount of effort they put in. Instead of lifting students out of their current position, instruction seems merely to help them tread water. Why is that? Are there ways of breaking this cycle and enabling more students to succeed?

One solution education researchers have proposed to this problem is known as "mastery learning." While best known for his "taxonomy of learning" (Bloom et al. 1956), the renowned educational psychologist Benjamin Bloom also developed the concept of "learning for mastery," which he introduced as follows: "Most students (perhaps over 90 percent) can master what we have to teach them, and it is the task of instruction to find the means which will enable our students to master the subject under consideration. Our basic task is to determine what we mean by mastery of the subject and to search for the methods and materials which will enable the largest proportion of our students to attain such mastery" (1968, 1). Bloom was arguing against the prevailing instructional methods of the time, in which students were exposed to a single sequence of direct instruction (e.g., lectures), expected to work independently towards understanding that instruction, and assessed on what they were able to retain or synthesize within the allotted time. While instructional methods have changed a great deal since the late sixties, I expect that many music-theory courses still do not require a level of achievement that Bloom might characterize as "mastery" (though a study of current assessment methods in music-theory classrooms would make for very informative future work).

To complicate matters even further, norm-referenced grading (Great Schools Partnership 2015)—colloquially known as "grading on a curve"—was in common use at the time of Bloom's writing, which meant that students were graded on their standing relative to their peers rather than against a set of clearly defined standards such as those used in criterion-referenced grading (Great Schools Partnership 2014). Norm-referenced grading assumes that human achievement approximates a normal distribution (or "bell curve"), and that by scaling grades according to such a distribution, learners may understand where they stand in relation to one another. Given the prevalence of such thinking, Bloom's comments on the "normal curve" and its relationship to instruction must have sounded radical:

There is nothing sacred about the normal curve. It is the distribution most appropriate to chance and random activity. Education is a purposeful activity and we seek to have the students learn what we have to teach. If we are effective in our instruction, the distribution of achievement should be very different from the normal curve. In fact, we may even insist that our educational efforts have been unsuccessful to the extent to which our distribution of achievement approximates the normal distribution (1968, 2).

Additionally, Duker, Gawboy, Hughes, and Shaffer point out that "[c]riterion-referenced grading can also help create a supportive learning environment that encourages collaboration and peer instruction, as opposed to norm-referenced systems, which involve competition between students" (2015, 2.1).
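To make the contrast concrete, here is a minimal sketch—in Python, with invented student names, scores, cutoffs, and grade labels—of how the two philosophies treat the same set of exam results. Under norm-referenced grading, a student's grade depends on where their score falls relative to their classmates'; under criterion-referenced grading, it depends only on a fixed standard.

```python
# A minimal sketch of norm- vs. criterion-referenced grading.
# Student names, scores, cutoffs, and grade labels are all invented for illustration.
from statistics import mean, stdev

scores = {"Ana": 92, "Ben": 78, "Cara": 85, "Dev": 70, "Eli": 88}

def norm_referenced(scores):
    """Grade each student by where their score falls relative to the class (a 'curve')."""
    mu, sigma = mean(scores.values()), stdev(scores.values())
    grades = {}
    for name, score in scores.items():
        z = (score - mu) / sigma  # standing relative to peers
        if z >= 1.0:
            grades[name] = "A"
        elif z >= 0.0:
            grades[name] = "B"
        elif z >= -1.0:
            grades[name] = "C"
        else:
            grades[name] = "D"
    return grades

def criterion_referenced(scores, cutoffs=((90, "A"), (80, "B"), (70, "C"))):
    """Grade each student against fixed standards, independent of how peers perform."""
    return {name: next((grade for cutoff, grade in cutoffs if score >= cutoff), "D")
            for name, score in scores.items()}

print(norm_referenced(scores))       # grades shift whenever classmates' scores shift
print(criterion_referenced(scores))  # grades depend only on the fixed standard
```

In the norm-referenced version, a student's grade changes whenever the class distribution changes; in the criterion-referenced version, every student could in principle earn an "A"—exactly the outcome Bloom argued effective instruction should produce.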

While the notion that a student's failing a course is evidence of the failure of instruction rather than of the student is less radical today than it was in the past, I find mastery learning to be most radical in its optimism about what students can achieve. Bloom believed that most students could master a subject if given enough time and instruction tailored to their individual needs. A normal grade distribution, by contrast, reflects the fact that the time allotted and the teaching methods used do not enable most students to reach high levels of achievement. Some students—those predisposed to whatever methods are used and capable of learning within the given time frame—succeed, while others do not. In a subject like music theory, in which many concepts build sequentially on previous ones, learning deficits often compound such that, by the conclusion of a course, some students have little chance of succeeding. Instructional time and teaching methods are thus the two elements mastery learning emphasizes in order to enable more—or all—students to master course outcomes.

Before discussing how mastery learning handles instructional time and methods, I must address a central question: what constitutes mastery of a subject? When can one be considered to have truly mastered something? In his practical guide Implementing Mastery Learning, Thomas Guskey defines "[t]he standard for mastery [as] the level of performance on the assessment that indicates that the concepts from the unit have been learned well," which he advises instructors to equate with a score of 85% or above on that assessment (1997, 89). Other educational systems such as competency-based learning (Sturgis and Patrick 2010) and proficiency-based learning (Great Schools Partnership 2016) also require students to demonstrate a high level of achievement before allowing advancement, but define that achievement in terms of competency and proficiency, respectively. In moving away from the term "mastery," these more recent systems reflect the feeling that it may be easier to assess whether a student has proven competent or proficient at something than whether they have become a "master" of it. To Bloom and Guskey, the distinction is unnecessary: if students perform well at the tasks an instructor has set for them, why not call that level of achievement "mastery"? As an educational goal, mastery certainly sets the highest and most optimistic standard.

I am sympathetic, however, to the suggestion that mastery might be a level reserved for something beyond what can be assessed using commonplace methods. In defining a "proficient" level at which students receive an "A" grade, Vicky Johnson reserves "mastery" for a higher level of achievement at which "students understand a concept to the degree that they can effectively teach others how to understand and apply the concept" (2015). Rather than requiring students to teach each other in order to demonstrate their mastery, I emphasize the creative application of theoretical concepts—especially through composition and improvisation—in my assessments. I assign a large creative project at the end of each semester to get at the higher level of "synthesis" found in Bloom's original taxonomy. In fact, a 2001 revision of Bloom's taxonomy elevated "creation" to its highest level in recognition of the capacity for creation to reveal synthesis (Anderson and Krathwohl 2001). Creative assessments also tend to be authentic, real-world tasks that prepare students for, and engage them in, the types of music making they will likely encounter in their careers. Anna Gawboy (2013) similarly discusses the merits of authentic assessment and calls for making theory instruction more like applied instruction, a subject that often relies on a mastery-learning model. I believe that, through a combination of assessments—some authentic, some inauthentic, some creative, some rote—I can be sure by the conclusion of my courses that students have mastered the material we have worked through.

On the more practical side, instructors interested in adopting mastery learning must consider what exact level of achievement qualifies, for them, as mastery. The authors of a meta-analysis of the effectiveness of mastery-learning programs found a very wide range of achievement levels among the studies they included (Kulik, Kulik, and Bangert-Drowns 1990). While they discounted one study that set mastery at 56%, the remaining studies pinned it at various levels above 70%. They ultimately found that setting the bar above 90%, among other factors, produced the largest gains in achievement for mastery-learning groups over control groups. Over time, I have moved away from grading student work according to a numerical, percentage-of-correct-responses system and towards a simple rubric with three categories: high pass, pass, and retake (or revise). The first step in constructing such a rubric is to define clearly, in simple language, what passing work looks like for every assessed task. Providing examples of previous students' work (with their permission, of course) is another very helpful way to communicate mastery standards. I also include descriptions of the high-pass and retake/revise levels, creating a grading structure similar to Jan Miyake's +/P/nP system (2012). Example 1 provides a rubric that I give to my students for an improvisation over the harmonic progression of George Gershwin's "I Got Rhythm," a task I described in detail in a previous issue of Engaging Students (Michaelsen 2016). I use this assessment in the third semester of my integrated theory/aural-skills sequence.

Example 1. Mastery rubric for improvisation on "I Got Rhythm"

  1. High pass: Chord tones of each chord clearly emphasized. Melody moves smoothly from chord to chord with no interruptions. Chord progressions in A and B sections clearly outlined. Important chromatic pitches of secondary dominant chords emphasized.
  2. Pass: Chord tones often emphasized, but with moments of contradiction that still fit with the general scale (e.g., B♭ major over the I-vi-ii-V progression). Some repetition of learned patterns, but other moments of freer improvisation. Chromatic pitches of secondary dominant chords used but could be emphasized more clearly.
  3. Retake: Chord tones not emphasized with enough frequency. Mistakes in pitch selection that contradict chords and scales. Melody halting and lacking in fluency. Little to no emphasis on chromatic pitches. Melodic patterns extremely repetitive (e.g., similar patterns played over all I-vi-ii-V progressions).

As the description of a retake score suggests, students who do not reach the high bar set for mastery do not simply move forward; they continue to work towards mastery by retaking or revising the portions of an exam they did not pass. This grading structure expands the amount of instructional time students may receive in order to reach mastery and alleviates some of the test-taking anxiety students can feel: instead of marking the end of a unit of instruction, an exam is merely the first opportunity for students to demonstrate their learning. Changing to a high pass/pass/retake grading structure presents at least two sets of challenges. First, it requires me to consider more carefully the sorts of activities I include on an exam. Finding the time to schedule retakes of timed, in-class exams can be difficult, which has led me to focus more on creative tasks like composition and improvisation that best demonstrate mastery. Retaking an improvisation requires only a few minutes after class or during an office hour, and revising a composition can be done entirely outside of class. With an exam component like transcription or dictation that is best administered in class, I offer additional follow-up attempts for students who do not pass on their first try. I permit my students to retake or revise exam components as many times as needed, as long as they show progress with each attempt.

The second set of challenges revolves around logistics and record keeping. There is an alluring simplicity to having an exam mark a firm conclusion to an instructional unit; in a mastery-learning model, by contrast, each student tends to be in a different place in their progress towards demonstrating mastery. While I expect that most instructors keep track of their students' achievement levels, mastery learning makes awareness of those levels central to instructors' day-to-day concerns. I am fortunate to have classes capped at 20 students, which makes finding time for retakes much easier than it would be in larger lectures. Flipped methods, such as those espoused by Duker, Gawboy, Hughes, and Shaffer (2015), can free up class time for assessing individual students. Additionally, the standards-based grading method those authors discuss—in which students' grades are based on the continual assessment of specific standards and goals—could be combined with a mastery-learning model to help manage large groups. But it is entirely possible that mastery learning is simply unfeasible in large lecture situations unless courses are structured with smaller sections, perhaps taught by graduate-student assistants, that allow for more individualized attention.
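For instructors who want a concrete picture of what this record keeping entails, the sketch below—a hypothetical illustration rather than a description of any gradebook software I actually use—tracks each student's attempts on each exam component and flags who still owes a retake or revision:

```python
# A hypothetical sketch of mastery-learning record keeping: one history of attempts
# per student per exam component. All names and results are invented for illustration.
RESULTS = ("high pass", "pass", "retake")

# component -> student -> list of attempt results, most recent last
gradebook = {
    "I Got Rhythm improvisation": {"Ana": ["pass"], "Ben": ["retake", "retake"]},
    "Dictation": {"Ana": ["retake", "pass"], "Ben": ["retake"]},
}

def record_attempt(component, student, result):
    """Log the latest attempt; only the most recent result determines a student's status."""
    assert result in RESULTS
    gradebook.setdefault(component, {}).setdefault(student, []).append(result)

def needs_retake(component, student):
    """A student still owes a retake if their most recent attempt did not pass."""
    attempts = gradebook.get(component, {}).get(student, [])
    return not attempts or attempts[-1] == "retake"

def retake_roster(component):
    """Everyone who has not yet demonstrated mastery on a given component."""
    return [s for s in gradebook.get(component, {}) if needs_retake(component, s)]

record_attempt("I Got Rhythm improvisation", "Ben", "pass")  # Ben passes on his third try
print(retake_roster("I Got Rhythm improvisation"))  # -> []
print(retake_roster("Dictation"))                   # -> ['Ben']
```

Even a simple structure like this makes it easy to see at a glance which students still need additional instructional time before the end of the semester.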

No matter how many times they are allowed to retake or revise exam components, a few students each semester fail to reach mastery. If I have given those students many retake opportunities without seeing improvement, I begin a conversation with them about their status in the course. My music department offers all four semesters of our integrated theory and aural-skills sequence every semester, so I counsel students in this position to withdraw and retake the course the following semester. Given that we hold all students to the same high standard, not achieving that standard is simply evidence that those students need more time to reach mastery. While this can be disappointing, it is a far better outcome than having them continue forward with severe knowledge or skill deficits.

By relaxing the time constraints that exam deadlines impose on learning, mastery learning gives students the time they need to process course material. Exams are not the only type of assessment that occurs in most classrooms, of course, nor are they the only type essential to adopting mastery learning. Exams are summative assessments, used to determine students' mastery of course goals and to assign course grades. Equally important are formative assessments, which gauge student learning while it is still in the process of forming; homework assignments and quizzes are two archetypes. As Guskey explains, "[a] formative assessment's most important characteristic is that it provide[s] students with precise and immediate feedback on their learning progress. This feedback can be used to remedy learning difficulties and serve as a guide for the correction of errors or misunderstandings that developed during the original instruction. For this reason, the scores that students attain on formative assessments often are not counted in determining their grade" (1997, 56). While I do include formative assessments in my students' course grades, I assign them a low weight (20%) and grade them for completion only, not for correctness: a student receives full credit for completing an assessment, with no consideration of its quality. In this way, I incentivize and reward students for completing these assessments without punishing them for making the mistakes that should be expected while they are in the process of learning.
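As a purely arithmetical illustration of that weighting—with invented numbers, since the actual components vary from semester to semester—a course grade under this scheme might be assembled roughly as follows:

```python
# A hypothetical illustration of weighting completion-graded formative work at 20%.
# The number of assignments and all scores are invented for the example.

# Formative work is graded for completion only: credit for doing it, not for its quality.
formative_completed = [True, True, False, True, True, True, True, True, False, True]
formative_score = sum(formative_completed) / len(formative_completed)  # 0.8

# A simplified stand-in for the summative side: the proportion of exam components
# eventually passed at the mastery level after any retakes or revisions.
summative_score = 0.88

FORMATIVE_WEIGHT = 0.20
course_grade = FORMATIVE_WEIGHT * formative_score + (1 - FORMATIVE_WEIGHT) * summative_score
print(f"Course grade: {course_grade:.0%}")  # -> Course grade: 86%
```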

After formative and summative assessments have been given, mastery learning requires instructors to consider different teaching methods for intervening with students who have not reached mastery. I expect that many of the interventions one might consider are already in common use: asking a student to reread a portion of the course text, requesting revision of a homework assignment, meeting one-on-one with a student after class or during office hours, directing a student to a tutor, or creating peer-study groups. But instructors should not continue to force a square peg into a round hole; if a student has difficulty understanding a particular way of presenting a concept, other instructional avenues may be needed. In music, we are fortunate to have a number of methods at our disposal for getting at a concept: we can ask a student to write about, explain, sing, perform, transcribe, dictate, improvise or compose, or dance to many musical phenomena. Indeed, many of the contributions to Engaging Students and the Journal of Music Theory Pedagogy provide examples of such alternative strategies. Rather than emphasizing any single teaching method, mastery learning calls on instructors to draw on as many of them as possible and to personalize their use when confronted with a student who struggles to achieve mastery.

One of the greatest strengths of mastery learning is its compatibility with most teaching approaches and course content. Indeed, it encourages instructors to adopt as many approaches as possible in order to reach every student. Efforts to reform theory curricula often focus on expanded repertoires, new instructional tasks, and innovative teaching methods—elements typically found in the second half of a course syllabus. I like to think of mastery learning as impacting the first half of the syllabus: the policies surrounding grading and assessment that often go unexamined. While setting mastery as the ultimate goal for all students might seem unattainable, it is the result that most theory instructors wish they could help their students achieve. Rather than writing off mastery-for-all as a pie-in-the-sky dream, we can embrace the radical optimism that mastery learning offers about our ability to teach all students.

Bibliography
