This is my (academic) blog and homepage. Read the short introduction to the blog here. I write for the benefit of those who wish to keep up with what I do and also for me to remember what I did last week!
Tuesday, June 7, 2011
The paradox of planning
How much time should be spent planning an event in relation to the time spent actually doing it? How much time should be spent planning a study or a software project in relation to the time spent actually doing that study or project? How much time should be spent evaluating a course in relation to the time spent actually teaching it?
When it comes to doing studies, most "junior researchers" (university students writing their theses) spend way too little time planning before they jump in, both feet first. That is a pity, because by thinking through all the steps of a study and the process of performing it beforehand, you can "debug" it - find and solve problems before you even start, rather than halfway in. The same goes for writing a large, complex computer program - time spent planning is undervalued, especially by junior programmers.
But after you have written that computer program, how much time should be spent documenting it? A traditional problem in software engineering is that very little time is spent on documentation and the short answer would thus be "more". At the same time, there is a law of diminishing returns at play both in the case of documenting computer code or planning a study. Spending twice as much time does not necessarily lead to twice as good results, and spending ten or a hundred times as much definitely won't. At some point, it's time to say "enough".
Moving to the university setting, how much time should be spent planning and giving a course, and how much time should be spent administering and documenting that very same course? At some point, any endeavor will face a diminishing marginal utility of pouring more resources into it. My previous blog post concerned the diminishing marginal utility of trying to control and improve the quality of higher education through top-down approaches (such as performing "Education Assessment Exercises"). If I give a course this year and I will give the same course next year, how much need is there really to document the course in excruciating detail? I can understand the need to do so if I were to hand off the course to someone else, but if I don't plan to do so, is it not enough that I do the work of thinking through the course, rather than writing lots of stuff up just for the sake of it?
My point is that on those occasions when I have written up a course evaluation, nothing has come of it. Sure, someone may have spent a few minutes looking at it, but that is of no special use to me. So why should I be forced to write things up just for the sake of it?
On the other hand, I would be passionately willing to discuss the course in question with someone who has a sympathetic ear and who might offer suggestions for improvements, but this service has never been offered to me, because someone else's time is a cost while my time is free (in the eyes of my superiors). This is another example of the difference between a top-down approach (ordering me to provide information that no one ever reads, or at least never acts upon) and a bottom-up approach (supporting me as a teacher with those things I perceive as problems in my day-to-day activities). It also unveils power relationships between me - a lowly and exchangeable provider of educational services, a cog in the machinery of an industrial model of education - and those who run the organization or the whole system of higher education in Sweden. In this model, students are the raw materials and teachers stand at a conveyor belt "treating" them in different ways until, in the end, we turn out "quality-assured" cookie-cutter engineers.
We never have enough money to provide hands-on support for teachers to help improve their courses, but we always have money to make sure that someone writes an email to remind me to fill out the course evaluation document and to perform an EAE assessment. Who reads the course evaluation document? How will the EAE assessment be used? Who knows? I for sure don't know! How come? Who knows?
Tuesday, May 31, 2011
Top-down vs. bottom-up
Over the weekend, I've thought a little bit more about exactly why the exercise I wrote about in the previous blog post (Education Assessment Exercise) matters to me. It is obviously not the two hours I spent (wasted?) on the exercise itself that matters, but rather something else that bothers me. What?
I believe it really is the folly of someone, somewhere thinking such an exercise will magically (?) improve the state of our education that bothers me. I might live under the misconception that there is goodwill involved and that this exercise is an honest (but misguided) attempt to improve what we do. A more devious alternative interpretation is that it is all about power and control, and that quality has little to do with it. Still, I will here operate under the first assumption and spell out why I think it is misguided.
I'd like to draw a parallel to the gentleman-"scientist" who, at the height of the British Empire, sits in his comfortable leather chair somewhere in the greatest city of them all, London, and reads accounts from all around the world (i.e. the British Empire) about "savages" and their affairs. Based on these reports - themselves of varying quality - he performs some magic sleight of hand. Through an act of armchair science (i.e. not getting his hands dirty by working with actual empirical material, but rather using others' interpretations of others' interpretations as his "research material" to be analyzed), he puts it all together into one grand unified theory about cultures and races. He "reasonably" draws the conclusion that savages can at best be compared to children and that "we" (white British imperialists) are obviously doing them all a great favor by ruling their countries and "taking care" of them. Later, and based on his "unified theory of savagery and governance", the Empire finds support for a variety of policies that one hundred years later look, well, "strange".
Such one-size-fits-all theories take little account of the unique idiosyncrasies of specific cultures and geographies, and they suffer from a whole lot of other intractable problems too. The lack of high-quality information, and the folly of drawing sweeping conclusions without any first-hand observations of one's own, has since been heavily discredited in scientific contexts. Scientist and science fiction author Isaac Asimov does a great job of portraying such dismal scientific processes on a grand scale in the aging, dying galactic empire of his Foundation trilogy. In an old, tired empire stretching over innumerable worlds, it is considered crude to collect actual empirical material, and refined to add yet another layer to what there already is. Armchair science in a culture no longer interested in the present or the future is in fact a defining characteristic of this vast empire in decline.
I find that some of the same mechanisms of trying to understand and rule at a distance ("top-down") are at play in a large bureaucratic organization such as KTH. I would suggest that one example of this "syndrome" is a vast overconfidence in the possibility of improving the content and the quality of whole university programs and individual courses through top-down measures, and a corresponding lack of confidence in the alternative: the possibility of reaching the same kinds of improvements through bottom-up measures.
To raise quality through bottom-up measures necessitates a high degree of trust in the faculty and in the teachers who are the nuts and bolts of the teaching effort. I would go as far as to say that there is little such trust in place today. I also freely admit that such trust can obviously sometimes be misplaced. However, if there is a perceived need to (in infinite detail) (attempt to) control every teacher and every course through copious and detailed instructions regarding this-and-that, this all really just signals that the average teacher is not to be trusted by the very organization she works for. But seriously, what can realistically be done at all if a university does not trust its core personnel - the people who actually teach?
It should be obvious that the most important task of all other personnel at a university should be to support those who actually teach; those who meet students in classroom situations and who try to impart some knowledge and at times hopefully even some wisdom. To me - a small cog in the machinery - it oftentimes feels like it's the other way around: other groups of employees (administrators, bosses) make demands on my and other teachers' time, not least because (from their point of view) our time is always free of charge. If I, on the other hand, would like to make demands on some other people's time within this organization (where have all the secretaries gone nowadays?), the gut reaction is instead to deem it expensive and therefore unrealistic.
I personally think that one of the best and least expensive ways to improve individual courses (and consequently whole educational programs) would be to set up high standards and requirements as well as high-quality support for the individual teacher. I most often feel that neither is in place, and I would personally not appreciate high demands without also having a high degree of support (as in "every man for himself", or "sink or swim"). As a university teacher, I am of course free to develop and improve my courses as much as I would like to - or not. There is unfortunately little official support for doing so, and few consequences of not doing so.
Friday, May 27, 2011
The folly of our Education Assessment Exercises (EAE)
I took part in an Education Assessment Exercise (EAE) a week ago. I can't say that it was "traumatic" or anything like that, but it did point out several things I find strange (e.g. stupid) about the individually easy but collectively overbearing administrative duties of a university teacher.
The EAE exercise itself took less than two hours - no big deal - but it still felt like a relatively futile attempt to "capture" important things about the courses I and other teachers teach at our program. I had a large sheet (A3-sized) with 11 high-level goals for our education. These goals were an amalgam of a variety of different (worthy) goals, but the results were cumbersome to say the least. Here are three examples of these 11 goals. Our students should:
- "demonstrate broad knowledge and understanding in scientifically based proven experience in the chosen area of technology (the main area), including expertise in math and science, significantly advanced knowledge in some parts of the area, and deepened insights into current research and development. Students should also demonstrate deepened methodological knowledge in their chosen technical area."
- "demonstrate an ability to holistically, critically, independently and creatively identify, formulate and manage complex problems and demonstrate the skills required to participate in research and development, or to perform independent work in other advanced contexts and thus contribute to the development of new knowledge."
- "demonstrate an understanding of capabilities and limitations in science and technology, its role in society and our responsibility concerning their use, including social and economic aspects as well as environmental and work safety aspects."
Do you get it? These goals are very "fluffy" and imprecise, or alternatively all-encompassing. Furthermore, they overlap (figuring out how much they overlap is in itself an advanced task). A third of the text/terms were marked in bold, signifying that they were extra important - and there were a lot of "extra important" terms in these texts... There was also a not-insignificant number of spelling errors in these texts, implying that they had not been prepared with a lot of care (much like this text, with the difference being that I don't force my colleagues to read it :-).
For each of the courses that I teach, I then had to specify if any of these goals were covered by 1) the (stated) goals for the course in question, and if so 2) what "learning activities" they corresponded to in the course, and if so 3) how these goals and activities played a part in the examination (and grading) of that course. Phew!
Filling out this form felt to me like a cross between a jigsaw puzzle and a big waste of time. My tiny act of rebellion was to ask/comment out loud that it didn't feel like this exercise and the relatively carefully crafted snippets of text that I produced really captured what was important about the courses in question. The answer I got was not very enlightening and had something to do with the fact that "we all had to do it anyway".
Now, I can understand that the one person who is responsible for our education in Media Technology (Björn) might have some use for these artifacts (the filled-in sheets), but I'm sure he could have gained much better information by performing short "interviews" with me and other teachers. I'm also far from convinced that the effort of performing this exercise (including Björn's effort to try to interpret and act on the information that teachers provide) stands in parity with the potentially positive results of the exercise.
My constructive suggestion would have been for Björn himself to have done the intellectually taxing/deadening work of trying to make sense of these wonderful-sounding but overlapping, slightly overbearing and potentially vacuous goals, and then to have used his interpretation as a starting point for conversations with the teachers. Instead, all teachers individually and redundantly had to do the work of trying to understand what all these (fluffy, overlapping) goals meant (and implied), what was wanted of us by this exercise (and by the person(s) who will interpret our scribblings), and then do our best at fudging up answers that sounded maximally impressive and convincing. If different teachers interpret the goals in different ways, we will provide information and answers that in subtle ways answer not the same, but different questions. The impression I wanted to convey when I filled out the EAE sheets was that my courses turn every student into a Leonardo da Vinci (or at least try to...).
I want to make clear that I have only the highest respect for Björn and what he does. Perhaps he would say that he doesn't have the time to perform individual interviews with teachers about their courses. That answer is valid, but it implies that this task might not be so important after all. That's fine, but if it's not worth doing well, then it might not be worth doing at all. Also, although Björn is the person best suited for this task, it does not necessarily have to be him personally who does it. The task could be outsourced to someone else (although I would personally decline).
This exercise was as much a test of my personal ability to interpret and act on complex texts, and to make (a perhaps sometimes rather bleak) reality sound like heaven on earth, as it was a way to convey some sort of "objective" or useful information to someone higher up in the chain of command. I do agree that this exercise can reveal that the goals of a course are no longer in line with the activities in the course, but finding this out could be done in many other ways, basically all of which are much simpler and easier than performing this exercise...
What I especially object to in this case is that the quality of the information I provide is so low that I have little confidence it is of use to anyone who will look at it, and that the results of any actions or recommendations that come out of this exercise will be so out of touch with the realities of teaching that they might hinder as much as they help. It is basically a lottery. One unfortunate possible outcome could be that teachers will be ordered to provide more information about the courses they give in order to... well, something. For example, to keep other people (newly hired exclusively for this purpose?) somewhere in the organization "in the loop"? Would this help to improve the quality of the education? Of course not.
This blog post just scratches the surface of the errors in thinking that this exercise is a symptom of. I hope to write a follow-up post about the folly of using precious resources (including my time) to try to improve education in this "top-down" manner.