A quick visit to www.greatschools.org can be quite instructive about the world of evaluation – but not for the reasons one might expect. The website essentially aggregates the publicly available data on almost any school in the United States and then assigns the institution a score between one and ten. Standardized testing provides much of the information used to derive the ranking, so the results naturally mirror the flaws of that process. But if the data-generation processes are at least equally flawed within a state or region, the rankings can still be indicative.
However, what is most interesting about the site is the opportunity it affords to juxtapose its positivist approach with the comments and ratings posted by students’ parents. A decidedly unscientific and unrepresentative sampling indicates there’s no firm correlation, but I was struck by the overwhelmingly positive commentary and 5-star public rating on a school currently scored by the site at an abysmal 2 out of 10. Various possibilities crossed my mind.
1) The data-based ranking is entirely wrong (possible but improbable).
2) The school is manipulating the site’s mechanisms to improve its image (possible, but not immediately verifiable).
3) The parents are responding from the context of a multiplicity of biases (quite probable).
4) A combination of all of the above (also quite probable).
Beyond the fact that humans are just inherently bad at examining data objectively, if a parent is compelled by civic statute and financial circumstance to send his or her offspring to a public institution, would that parent acknowledge in any open forum the fact that the children are receiving an awful education, thus condemning the children as much as the school?
As the proud father of the two most beautiful and intelligent girls in the world, I’m quite convinced that parents are the least able to respond objectively about their children (they tend to disagree with my first point), so I grow concerned when I read evaluation after evaluation of Sistema programs that relies heavily on parental input. The overwhelmingly positive weighting of the majority of responses strikes me as suspect. Are parents trying to safeguard a free babysitting service, regardless of quality; attempting to please the assessor; or avoiding characterizing their own progeny as developmentally slow? Consider the context: would anyone ever say “No, my child has no more confidence at all!” in response to the typical survey question? The obvious point to be made is that it’s not the answer that is at fault, or even the respondent – it’s the question.
But then, what’s the right question? Or should we ask at all? It’s easy to say evaluation is the major challenge facing Sistema programs right now (I do it all the time), but extraordinarily difficult to make sense of the nature and scope of that issue – which is why the paper presented by four members of the outgoing class of Sistema Fellows ranks in my mind as a document I would actually describe as one of the very few must-reads for anyone genuinely serious about having impact in this sphere of endeavour (my own output is not included on that shortlist, for the record.) Aside from being impeccably written and devoid of the usual insightless rhetoric, it is one of very few papers to connect its proposals and frameworks to a broad base of scientific research while clarifying the major points under consideration. It is intelligent, readable and most of all, useful. Congratulations to Andrea, Carlos, Sara and Elaine for having made a substantive contribution to the literature even beyond the narrow Sistema community. I’d hire them all, if I could.
As we’re looking to refine and focus the new models or frameworks, there are others attempting to do the same within the existing ones. I was made aware of one such program recently, the Opportunity Music Project in New York City. With its focus on private instruction it’s not technically Sistema, but as I am frequently compelled to remind people, much great work in music education is done beyond Sistema programs. I was particularly struck by the founder’s unique articulation of the idea of parents and teachers as “co-collaborators in the success of the student.” They’re running an Indiegogo campaign right now to support their work, and I would encourage you to give. What persuaded me to pull out my credit card was this video.
The content and the messaging were largely consistent with our industry as a whole, but the tipping point came at the three-minute mark. The founder, Jessica Garand, unvaryingly articulate and sincere up to that point, sounds like another person entirely as she outlines her vision for the program. The excitement, dedication and clarity of purpose in her voice and in her message are all unmistakable. If you can skip one Starbucks this week and send $5 their way, please do. Not a positivist response on my part – but since when were our decisions really data-driven?
4 thoughts on “The Data, the Data-driven, and the Driven”
Glenn Thomas • Jonathan, your emphasis on the importance of the paper written by the Sistema Fellows Research Group inspired me to spend time reading it. A great piece indeed!
So far I’m 13 pages in, and realizing just how much work there is in understanding this area, let alone planning and executing it. One impression is how much this will cost. Sistema programs are already very expensive; Ken MacLeod recently quoted over $3MM to serve 1500 students.
In my years of bringing sustainability into business, I’ve learned that sequence matters. What you do first is determined by its ability to create revenue. For example, in making a manufacturing company more environmentally sustainable, you first find opportunities to reduce wasted resources. The effect is significant in reducing carbon footprint, but also in reducing cost and thereby increasing net revenue. The increased revenue pays for the next effort, and so on. If you hope to get it all done, you must do it in the right order.
Prioritizing research efforts on those with the highest financial return gets you to the end of your list with a stronger balance sheet than you started with.
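The sequencing idea above can be sketched as a toy calculation – purely illustrative, with hypothetical initiative names and figures that are not drawn from any real program budget: order initiatives by net return, then let each completed step’s surplus fund the next.

```python
# Illustrative sketch only: sequencing initiatives so that early gains
# fund later steps. All names and numbers below are hypothetical.

def sequence_by_return(initiatives, starting_budget):
    """Greedy ordering: try the highest-net-return initiative first,
    adding each funded initiative's net return to the running budget."""
    ordered = sorted(initiatives,
                     key=lambda i: i["return"] - i["cost"],
                     reverse=True)
    budget = starting_budget
    plan = []
    for item in ordered:
        if item["cost"] > budget:
            continue  # can't afford this step yet in this simple model
        budget += item["return"] - item["cost"]
        plan.append(item["name"])
    return plan, budget

initiatives = [
    {"name": "waste reduction",        "cost": 10, "return": 30},
    {"name": "energy audit",           "cost": 5,  "return": 12},
    {"name": "supply-chain overhaul",  "cost": 40, "return": 55},
]
plan, final_budget = sequence_by_return(initiatives, starting_budget=15)
# The overhaul is skipped: its cost exceeds the budget even after the
# first win - the "going broke in the middle" risk the commenter names.
```

Note how the model also illustrates the failure mode: an initiative whose up-front cost outruns the accumulated surplus never gets executed, which is exactly why the order matters.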
If Sistema assessment and evaluation must be adopted sequentially, strong consideration should be given to seeking and measuring those outcomes that will produce the best results and revenue. This is why we have finance majors all around us: they can tell us the right sequence to take so we don’t go broke in the middle of the movement.
Knowing the internal and external value of research efforts and prioritizing accordingly is critical to achieving maximum success.
The business sustainability approach I’m referring to is described in this video, starting at about the three-minute mark: http://youtu.be/ybyI-rRJ3ss
I agree with you entirely. I’d like to highlight something to which you alluded – and something of critical importance, I think – the theory of action that underlies the sequencing. Every step should be tied to a specific expected outcome at every level, be it that of operations or pedagogy, or the feedback circuit fails, and any possibility of intelligent sequencing fails with it. Your example of the reduced carbon footprint being tied to reduced costs demonstrates the need for a testable hypothesis: if the reduced footprint fails to reduce costs, then the underlying theory of action needs to be modified, or different ways to achieve the objective have to be identified.
Without even reading this report, I have one question: what is the time frame of “sustainable”? I’m starting with four-year-olds on average. That means I won’t get conclusive results until about 14-16 years later.
Reread Eric’s recent piece as well.
What are some ways that we can create assessment tools designed to engage both parents and students in ways that empower their agency? That is to say, assessment systems that inspire students to set goals they are motivated to own. I would gladly blow holes in the assumption that this is merely a movement to colonize or gentrify the underserved with classical music.
You’ve asked multiple questions here, all valid. I would say that we need to evaluate in multiple timeframes: identify those characteristics that can be measured short or mid term, and report on them frequently, while continuing to monitor longer-term benchmarks as well.
Right now our evaluation tools are a direct response to our mode of teaching: unidirectional data transmission. If the teaching we propose is in fact different, it will automatically suggest different modes of measuring.