Friday 10 June 2011

On the fallible nature of tests and testing

My previous blog posts/rants about top-down vs bottom-up approaches to improving education and the paradox of planning are here complemented with some thought-provoking quotes about tests from Gerald Weinberg's 1971 book "The Psychology of Computer Programming". I have the 1998 Silver Anniversary edition; the quotes below are harvested from pages 154-155. I have commented on each quote so as to make the connections to issues raised in the preceding blog posts more explicit.

"A number of firms have used the Strong test for selecting programmers [...] making a judgement that a programmer is "like" a mathematician, engineer, writer, or what have you. Since the basis for these judgements is pure speculation, such selection procedures could have been equaled by throwing dice. Throwing dice, however, does not have the right sound to it. A personnel manager can say, "We use the Strong Vocational Interest Blank to help us select programmers." This will certainly impress his manager more than saying, "We throw dice to help us select programmers.""

If you don't really really know what you are looking for or what you are measuring, throwing dice might produce comparable results. Investment advice from monkeys throwing darts has been shown to be just as good as the advice of the average "qualified" investment manager much of the time (around 50%). If you really really don't know what you are evaluating (or, more generally, doing), throwing dice might be an attractive low-effort, low-cost alternative. :-)

"Even assuming that the profiles are available [...] do they really reflect what we want? After all, these profiles are obtained by testing people already in the profession, not the people we would necessarily want to be in it if we knew what we wanted [my emphasis]. In old, established, and stable professions, it may be valid to assume that people in the profession are, by and large, the ones who should be in it - even though they might have been steered there by [...] a self-fulfilling prophecy."

Think less of "people" and "professions" and more about "educational programs" and you notice the conserving power inherent in the preoccupation with evaluating what is, and what has been. How do you develop a world-class innovative university education? By evaluating what has worked fine before/elsewhere/up to now, or by taking stock of the future and setting out in a new direction - even, or especially, if that direction differs from merely extrapolating from the past? I presume "best standards" approaches are fine if your goals are set for mediocrity or slightly-above-average, but in order to do better than that you really do need to think for yourself (yourselves) and look forward more often than you (anxiously) look backwards over your shoulder.

"Essentially all psychological tests [...] assume that the psychologists who made the tests are smarter than the people who take them. Indeed [...] people who are attracted to psychological testing as a profession [...probably...] hold themselves to be smarter than other people. Perhaps it could not be otherwise, for how would they get evidence to the contrary? [...] In a way, a personality test is an intelligence test - a matching of wits with the person who made the test."

If a monkey in the jungle stumbled upon an iPad or some such high-tech gadget, he would probably think it is a very stupid or silly object - and how could it be otherwise? Do the people who create a "test" such as the Education Assessment Exercise, as well as those who buy in and carry it out, those who analyze the results, and those who act on these results "by necessity" think that they are smarter than the people who "get the short end of the stick" and whose only role is to merely provide them with information? (Yep! It is an unequal relationship.) Will they "by necessity" think that their conclusions about "what is to be done" are of a higher caliber than the opinions of the flesh-and-blood university teachers who, by filling out these tedious forms, provide them with truthful (?) information on which they are to act?

Instead of filling out forms and providing others with information (and putting our hope in wise decisions made by some nebulous "them" residing elsewhere), a better way to improve an education might be to create the space-time for those who actually teach to regularly meet and discuss issues that they find necessary, interesting or problematic in their day-to-day, month-to-month or year-to-year activities! To "create time" means (for example) to protect the time of the faculty at a university from incursions by others - including centrally initiated requests for this-and-that or (some) students' extravagant expectations of getting personal answers by email within a day to any and every question they might pose (even though the answers might already have been provided at the introductory lecture and even though many of their friends might know the answer). Perhaps a central person (a process leader or mediator) would need to do no more than to put his ear to the ground and listen to the concerns of those who are enmeshed in the day-to-day operations and then raise some of these issues into points that are to be discussed among colleagues?

"As we know, applicants for programming jobs are likely to be a rather clever bunch, so we can assume that a great deal of "cheating" will take place if they are given [personality] tests. But that should not worry us, for if they cheat successfully, they are probably going to have a number of the critical personality traits we desire - adaptability to sense the direction of the test, ability to tolerate the stress of being examined by someone they don't know under assumptions they cannot challenge, assertiveness to do something about it, and sense of humor enough to enjoy the challenge."

Perhaps I (and other teachers) filled out the Education Assessment Exercise not with the goal of providing "accurate" and "truthful" information, but rather with the intention of making my own courses sound as good as possible, and in doing so attempting to aggrandize myself as a teacher... (job security, salary, image and social standing)? Perhaps I did not set out to do that, but it might still be difficult for me to provide information about the courses I teach without implicitly/unconsciously promoting myself...? (...How do I want to come across to my superiors and to other (unknown) people reading what I produce? ...What should I write in order to convey these impressions?)

Perhaps I (and other teachers) did fill out the Education Assessment Exercise with the goal of providing "accurate" and "truthful" information - but that information might still turn out to be of questionable value to someone who has no knowledge of the underlying reality behind the information provided (all those beautiful formulations lovingly crafted to reveal some facts and conditions and to sweep others under the carpet...)? Distant decision makers have the map, but the relationship between the map and the underlying reality is shaky. To the person with a hammer everything will look like a nail, and to the person with a map, every obstacle will seem easily skirted from the comfort of his armchair...

My conclusion is that the value of tests and testing in general is greatly exaggerated! The implications of this statement for what students do at universities are fundamental and far-reaching. It might become the topic of a future blog post.
