This was a fascinating post I stumbled upon on LiveJournal that I wanted to share. celandine13: Errors vs. Bugs and the End of Stupidity:
In fact, wrong notes always have a cause. An immediate physical cause. Just before you play a wrong note, your fingers are in a position that makes that wrong note inevitable. Fixing wrong notes isn’t about “practicing harder” but about trying to unkink those systematically error-causing fingerings and hand motions. That’s where the “schizophrenia” comes in: pretending you can move your fingers with your mind is a kind of mindfulness meditation that can make it easier to unlearn the calcified patterns of movement that cause mistakes.
Remembering that experience, I realized that we tend to think about mistakes the wrong way, not only in the context of music performance but also in the context of academic performance.
A common mental model for performance is what I’ll call the “error model.” In the error model, a person’s performance of a musical piece (or performance on a test) is a perfect performance plus some random error. You can literally think of each note, or each answer, as x_i + c*epsilon_i, where x_i is the correct note/answer, and the epsilon_i are iid random variables, Gaussian or something. Better performers have a lower error rate c. Improvement is a matter of lowering your error rate. This, or something like it, is the model that underlies school grades and test scores. Your grade is based on the percent you get correct. Your performance is defined by a single continuous parameter, your accuracy.
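To make the error model concrete, here is a minimal simulation sketch of my own (the function name `error_model_score`, the `tolerance` parameter, and the grading rule are all illustrative assumptions, not anything from the original post): every response is the correct answer plus Gaussian noise scaled by c, and a “better” performer is simply one with a smaller c.

```python
import random

def error_model_score(correct_answers, c, tolerance=0.5, seed=0):
    """Sketch of the 'error model': each response is the correct answer
    plus iid Gaussian noise scaled by c; the score is the fraction of
    responses close enough to count as right."""
    rng = random.Random(seed)
    right = 0
    for x in correct_answers:
        response = x + c * rng.gauss(0, 1)      # x_i + c * epsilon_i
        if abs(response - x) <= tolerance:      # graded correct/incorrect
            right += 1
    return right / len(correct_answers)

answers = list(range(20))
print(error_model_score(answers, c=0.3))   # small c -> high score
print(error_model_score(answers, c=2.0))   # large c -> lower score
```

Under this picture, the only lever for improvement is shrinking c: practice harder, be more careful, make less noise.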
But we could also consider the “bug model” of errors. A person taking a test or playing a piece of music is executing a program, a deterministic procedure. If your program has a bug, then you’ll get a whole class of problems wrong, consistently. Bugs, unlike error rates, can’t be quantified along a single axis as less or more severe. A bug gets everything that it affects wrong.
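The contrast is easy to see in a toy example of my own (the `subtract_buggy` procedure below is hypothetical, not from the post): a learner whose subtraction “program” never borrows will get every borrowing problem wrong, deterministically, while sailing through the problems that happen not to need borrowing.

```python
def subtract_buggy(a, b):
    """A deterministic 'program' with a bug: subtract digit by digit but
    never borrow, taking the absolute difference of each digit pair."""
    width = max(len(str(a)), len(str(b)))
    a_digits = [int(d) for d in str(a).zfill(width)]
    b_digits = [int(d) for d in str(b).zfill(width)]
    result_digits = [abs(x - y) for x, y in zip(a_digits, b_digits)]
    return int("".join(str(d) for d in result_digits))

# No borrowing needed: the buggy program happens to get these right.
print(subtract_buggy(47, 23), 47 - 23)        # 24 24
# Borrowing needed: every problem in this class comes out wrong, consistently.
print(subtract_buggy(42, 28), 42 - 28)        # 26 14
print(subtract_buggy(503, 178), 503 - 178)    # 475 325
```

No amount of “lowering the error rate” helps here; the only fix is to find the bug and patch the procedure.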