The Joseph Haydn Competition in Vienna came to an end on Wednesday with a prizewinners’ concert and the presentation of prizes by the sponsors and jury. Afterwards, there was a reception hosted by the university, where I was immediately approached by someone involved in scientific research and evaluation. He wanted to know whether, before the competition began, we the jury had agreed on a list of the exact criteria we were looking for, and the marks to be given for each.
When I said we hadn’t, his face fell. ‘You hadn’t?!’ he exclaimed. I tried to explain that the jury was indeed looking for a whole range of things, from technical mastery through stylistic understanding to persuasive artistry, but that these things cannot be allocated points in isolation from one another. I’m sure we’d all been involved in attempts to impose ‘scientific’ marking systems, but the most important elements of artistic impact remain stubbornly outside ‘measurement’. How can you give exact points for how much something moves you, or for how it illuminates something new? Many musicians are wary of the currently fashionable trend for ‘measuring’ everything, because it leads to a situation in which only those things that can be measured are included in the criteria.
My companion sighed and said that every musician, actor and artist with whom he had ever discussed the matter had said the same thing. It was frustrating, he said, for those trying to create scientific methods of evaluation that could be used across the university, and between universities. I can see this, of course, but I can’t see any way around it except not having competitions or exams in the arts at all. But that would have all kinds of other effects on artists’ possibilities, and many of them would struggle to come to public notice. Most would be reluctant to swap subjective assessment for a scientific system which prioritises the demonstrably ‘perfect’, because we all know how inadequate that would be.
I once took part in a training day for a system of music exams which, instead of asking the examiner for a single mark out of 25 for each piece of music, required the examiner to break down that mark into, say, 5 marks for technical skill, 5 for historical awareness, 5 for structural understanding of the music, 5 for artistic imagination, and 5 for performance and communication skills. It was a total nightmare. In attempting to comply with the marking scheme, we quickly discovered that these things shade into one another, and sometimes cancel one another out. It seemed that the old system, of coming up with an ‘unscientific’ but intuitive mark of, say, 20 out of 25, was probably just as meaningful.
‘You’re going to tell me that it’s not quantity, but quality that matters, aren’t you?’ said my companion. ‘Yes.’ ‘But please tell me, how is quality defined?’ ‘I don’t know how to define it, but you know it when you hear it’, I said. ‘You know it when you hear it’, he echoed – ‘yes, every musician I’ve asked has told me that!’