In my previous entry I asserted that the state of Mississippi had removed fractions from its curriculum, and I offered a link for reference purposes. As anybody who clicked through and read the supporting web page must have realized, the statement was and remains patently false: despite coming from the generally reputable myth-busting site Snopes.com, the news article in the link becomes progressively more ridiculous, and is filed under the site’s “The Repository of Lost Legends” subsection. (Hats off to Eric Booth, the only person who contacted me about it.)
As a gesture it will certainly be misinterpreted, but the intention was never to troll my readership, only to prove a point: it is a stunning indictment of the American education system that such a tale has any plausibility at all. And the fact of the matter is that in a nation where an Indiana state representative tried to legislate the value of pi as a rational number in 1897 (this is true); where the scientific validity of evolution is still hotly debated (yes, true); and where some educational activists do, really and honestly, advocate removing fractions from the curriculum (read the article, it's quite interesting), the idea does not strain credulity.
Case in point: at the Alberta Music Education Conference in Red Deer last weekend, I made reference to a development in Florida earlier this year in which only 27% (roughly 3 in 11) of fourth graders achieved the minimum passing grade of 4 out of 6 on the state standardized writing test. When I asked the delegates how they thought the state responded, the answer was never "by reinvesting in education" or "by calling a public inquiry." No, the response was immediate and instinctive: "they changed the passing grade."
And that’s exactly what happened.
The Florida State School Board argued that given the passing rate of 81% (roughly 9 in 11) the previous year, the magnitude of the drop in 2012 reflected a deficiency in the test rather than in the teaching. Even if this explanation were accepted at face value, the issue remains extremely concerning, and it is not limited to Florida. Standardized testing is controversial enough conceptually and philosophically, but now the purportedly straightforward mechanics have proven to be highly fallible as well. Yes, some form of measurement is essential, but what possible purpose could the act of measuring serve when conducted with a broken yardstick?
If you haven’t read Marshall Marcus’ excellent summary of the complexities of, but overriding necessity for, evaluation in music education, you should. For my part, I agree with him entirely, having made some of the same points at different times in the past. But I can’t shake the suspicion that the profession of music education, having long escaped the scourge of involuntary standardized testing, is now willfully and deliberately accelerating towards it. As an industry we were never without it in voluntary form: the audition is the most prevalent example, but the competitive festival phenomenon is merely another incarnation, as are practical music examinations such as those offered by the Royal Conservatory of Music. The testing, such as it is, is undertaken for precisely the same purposes as well: to evaluate the current technical aptitude of the student (and, by warped extension, the quality of the teacher), and to differentiate for the purposes of advocacy or self-promotion, primarily at the micro level.
The universal truth of a test, regardless of discipline, is that it can tell you the current level of competence of the taker within the narrow scope permitted by the examination, but it gives no indication of his or her potential. Testing twice, thrice or more isn’t the solution either, because Sistema is, at its best, a human developmental program. Comparison is essential, but comparison to what or whom? A collective baseline, a lowest common human denominator, or worse, the highest performers?
As a conductor, measured against Gustavo Dudamel, I am an abject failure. As a writer, measured against Henry James (one of my favourites), I am hopelessly inadequate. But I elect not to make these comparisons meaningful or relevant to me. I choose to be measured first against my wife’s expectations for a husband, thereafter my daughters’ expectations for a father, and lastly, in a fickle industry in which fame, financial success and excellence are not synonymous, by whether I hold the respect of those I respect myself. I may just have revealed myself to be professionally or artistically unambitious, but I still have lofty goals in areas of endeavour that are important to me, and I intend to achieve them. I measure myself primarily against my own perception of my potential; I have also permitted selected others to define some of the parameters where our objectives overlap, and I pair their expectations with my own aspirations for internal guidance.
Measurement by any standards except those we accept is an imposition of values. Evaluation may be inevitable, but it is also inevitably incomplete, if not inaccurate, if not unjust. The idea may be utterly impractical and unworkable in reality, but allowing youth just once in their life the chance to choose how they will be assessed, to determine for themselves the standard to which each will be held individually, may be the most motivating, most empowering, most potential-defining moment they will ever have. How we choose to be evaluated may reveal as much as, if not more than, any test ever could.
One thought on “Making the grade”
Hi Jonathan. In some workshops, I lead an activity in which I ask people to recall a time when they learned something hard, over time, something they really cared about. And then I have them reflect on that learning journey in several ways, asking them to answer questions like: How did you know you were getting better? What would have helped you get better faster? How would someone who went through the process closely with you have known you were getting better? What would the ideal kinds of feedback have been for you? A conversation about assessment and evaluation inevitably ensues, and as you propose in your last paragraph above, the learner’s selection of who gives feedback, and how, and when, matters a lot. I have a theory that when people are intrinsically invested in a task, they WANT to know how they are doing; they are hungry for assessment and evaluation of the right kind. Experienced artists know what kind of assessment and evaluation helps them, and often adhere to it, fiercely; we would do well to help learners come to know what works for them too. I think your recommendation above is wise, Jonathan, to include the learners in the explorations of what kinds of assessment and evaluation matter and should be invested in by programs, teachers and learners together.