Wednesday 29 June 2011

Program-integrating course 2010/2011 summary

This will be a relatively long and rambling post, and I might come back and edit it later. It is as much a blog post that might be of interest to students who take the course (and who would like to hear the teacher's perspective), or to the nine teachers in the course, as it is a summary that I might return to and review later.

Formal information & basic structure of the course
-------------------------------------------------------------
The program-integrating course [DM1578, programintegrerande kurs i medieteknik] is a 7-credit course. Some 250 students take it - all students studying for a degree in Media Technology are required to. The course runs over three years; we meet four times per year and discuss a pre-specified theme as well as the students' courses and experiences during the previous quarter. The students are divided into 36 groups with half a dozen students in each, and we try to mix first-, second- and third-year students. Each teacher in the course leads four groups, and the teachers report back to me. Students get 3 credits after the first year and 2 credits after each of the following two years.

Development of the course over time
--------------------------------------------
This course sounded wonderful on paper when the whole program started, but it quickly ran into problems as the program filled up with students. How do you fill a course such as this with content? How do you run it practically? I gave the course for a few years before my colleague Björn took it over three years ago. He changed the structure of the course (most probably for the better) into what I described above. As he was away, I took over the course for the 2010/2011 academic year.

Basic challenges of the course
-----------------------------------
The huge challenge here is to administer the course in practice. It is difficult, to say the least, to keep track of 250 students who meet four times a year. On top of that we have students who are on leave, who study abroad for a semester or two, or who miss one out of four seminars (or two) for reasons that are sometimes better and sometimes worse (forgot, overslept, trip abroad etc.). One of the problems when I took over the course was that many students had, for some reason, not gotten the previous year's credits for the course. It is difficult for both me and the students to know exactly why they didn't get their credits. There is probably a reason (besides administrative SNAFUs), but no-one remembers, and these things are probably not documented in a way that is easy to find and understand. It is detective work to find out why someone didn't get their credits. It is another task to gauge the seriousness of the lapse and to decide what kind of extra task needs to be done in order to get the credits. It takes time and it is boring.

My main contributions during this year
------------------------------------------------
- Better administrative routines for the teachers to document and report on students' performance throughout the academic year and the seminars.
- We had a lunch for the teachers in the course during the autumn and another one during the spring. This gave us the chance to discuss the course, its routines and how to improve it, and it was great. Course development on the cheap (the only cost was the lunch itself).
- Better routines and rules for what happens when students miss smaller or larger parts of the course. Students collect "bad karma" when they fail to do what they are supposed to. Bad karma accrues and results in smaller or larger tasks to be completed at the end of the year. Better documentation of the status of students who don't complete this year's course, so as to better track them later.
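The "bad karma" bookkeeping is simple enough to sketch in code. Everything here - the lapse names, point values and task thresholds - is hypothetical, invented purely to illustrate the kind of tally we otherwise keep by hand:

```python
from collections import defaultdict

# Hypothetical karma points per lapse (the real values are invented here).
KARMA = {"missed_seminar": 2, "late_report": 1}

def extra_task(total_karma):
    """Map accrued bad karma to a make-up task at the end of the year."""
    if total_karma == 0:
        return "none"
    if total_karma <= 2:
        return "short written reflection"
    return "full make-up seminar report"

ledger = defaultdict(int)
ledger["student_a"] += KARMA["late_report"]
ledger["student_b"] += 2 * KARMA["missed_seminar"]

for student, points in sorted(ledger.items()):
    print(student, points, extra_task(points))
# student_a 1 short written reflection
# student_b 4 full make-up seminar report
```

The point of writing the mapping down once, in whatever form, is that it replaces case-by-case detective work with a rule everyone can check.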

The main things that I have a bad conscience for not having done (yet)
-----------------------------------------------------------------------------------
- There are students who completed this year's course, but who now get in touch and tell me they didn't complete last year's course. It is a hassle to deal with these students (it takes time and requires boring detective work) and I have a backlog of cases that I have not fixed yet.

The main things that will change in next year's course
-----------------------------------------------------------
- A theme will be chosen and the four tasks for the whole academic year will be formulated already at the beginning of the term (August/September), so that they can be portioned out later rather than hang over me like a dark cloud. It happened that the students got instructions for a seminar relatively late this year; that is unnecessary and could be fixed relatively easily.
- Even better administrative routines and instructions for the teachers. Students will not be allowed to jump to another teacher's group because the allotted time for a seminar did not suit them. This information is so often lost, and that results in an added administrative burden for me and the teachers (and the students) in an already hard-to-administer course.

This is the course where students evaluate (discuss) all the other courses they take, yet we don't distribute a course evaluation for this course itself.

Monday 20 June 2011

Books I've read lately

I wrote pretty recently about my reading habits in general, and I have also previously written about the work-related books I read during the autumn. I read more or less two non-fiction books per month, of which one is work-related. Here is a summary of the books I've read during the spring:

The Swedish-language 2010 anthology "After the Pirate Bay" ["Efter The Pirate Bay"] contained no less than 19 contributions, of which a bunch were written by persons I happen to know (one contributor sits in the room next to mine). The contributors have a variety of backgrounds, and while most are researchers (from a variety of disciplines), some have other backgrounds (for example a few journalists, a member of parliament, an entrepreneur). Almost half of the researchers are Ph.D. students (i.e. junior researchers, probably in their 20s or early 30s). They write about a variety of issues and the book is divided into three parts: "technology", "pirates" and "politics". The book was naturally a little uneven, but it was in general a good read, and one of the texts in fact gave me an idea I hope to develop in a shorter text of my own. My main take-away from the book is how totally out of sync current copyright/IP laws are with the technology of our times - and with common sense! (Not that I didn't know it before...) Although the book (of course) is available for free online, it is inexpensive enough (69 SEK) for anyone who wants to read it to order the paper version.

Several in-your-face examples of just how out of touch and absurd our laws for regulating Intellectual Property (IP) are were glaringly obvious while reading Lawrence Lessig's "Free culture: The nature and future of creativity" (2003). I notice that the more recent edition of the book has another subtitle: "How big media uses technology and the law to lock down culture and control creativity". That just about summarizes it all. Lessig (whose earlier book "Code: And other laws of cyberspace" impressed me a lot) sounds like a lone sane voice in a desert of lobbyist-fed propaganda from industry dinosaur-titans trying their best to put obstacles in the way of the "creative destruction" that would make short work of their previous-century, industrial-age business models. By looking backwards and clinging to the past, organizations such as the Recording Industry Association of America (RIAA) and the Motion Picture Association of America (MPAA) squarely stand in the way of new, better business models that are more appropriate for the time we live in. Unfortunately these previous-century dinosaurs command hefty purses and thus do their best to strangle common-sense attempts to free the positively vast amounts of culture that have little or no economic value in the marketplace (because they are old, or of interest to the few rather than the many etc.), but that in the age of affordable networked computers are of vast value to our societies and our culture. Even though Lessig is not alone, he is one of the best at formulating the critique against the current state of affairs. Lessig is an astute observer and a great writer - but I can't keep up with him, as he has written a new book every second year during the past decade.

Paul Edwards's book "The closed world: Computers and the politics of discourse in cold war America" (1996) is a book I have owned for more than a decade, and I have in fact started to read it once before but got stuck 30 or so pages in. Although the book is a little on the heavy side to read, it is extremely well-researched and manages to convey a picture of pioneering work in the computer sciences that is radically different from most of what I have read before/elsewhere. Edwards ties the development of digital computers (and the cognitive sciences) tightly to American cold war goals and mindsets, and to military dreams of creating cyborg human-machine interfaces and war machines - or even getting rid of the slow, unreliable human in that loop! There are many things in the book that "rewire" the standard history of computers, software, human-computer interaction, artificial intelligence (AI) and the cognitive sciences, for example the vast (often non-directed) support from the military for basic research in computer science in general and AI in particular during its first 20 years as an academic field. The final - almost 50-page-long - chapter analyzes the computer in popular culture (with an emphasis on films) from the 1960s to the 90s (Dr. Strangelove, 2001, Tron, Star Trek, War Games, Star Wars, Terminator etc.). The analysis livens up, ties back to and applies Edwards's earlier analysis of the much "heavier" and "drier" material, and it is brilliant!

The last book I read during the spring was Richard Sennett's "Flesh and stone: The body and the city in western civilization" (1994). This is another book that I have owned for quite some time, and while I have read other, later books by Sennett with pleasure, I have to say that this book was both a little heavier and a little further away from my academic/research interests than I thought before I started to read it. The book treats the relationship between the human body and the city in a variety of ways: how humans move, adapt to and conduct their affairs in relation to the design of the city and city life, as well as how the city itself is a product of culturally bound conceptions about humans, their relationships to others, and their religion, cosmology and world view. The featured cities are (primarily) the ancient Greek city-state of Athens, Rome, medieval Paris and Venice, industrial-age Paris and London, and modern New York. While I found the book moderately interesting, it was not the easiest read, and it is doubtful I will have any practical use for it in terms of my (wide-but-finite) research interests.

Have you read any of these books (or would you like to)? What is your opinion about them?

Friday 10 June 2011

On the fallible nature of tests and testing

My previous blog posts/rants about top-down vs. bottom-up approaches to improving education and the paradox of planning are here complemented with some thought-provoking quotes about tests from Gerald Weinberg's 1971 book "The psychology of computer programming". I have the 1998 Silver Anniversary edition; the quotes below are harvested from pages 154-155. I have commented on these quotes so as to make the connections to issues raised in the preceding blog posts more explicit.

"A number of firms have used the Strong test for selecting programmers [...] making a judgement that a programmer is "like" a mathematician, engineer, writer, or what have you. Since the basis for these judgements is pure speculation, such selection procedures could have been equaled by throwing dice. Throwing dice, however, does not have the right sound to it. A personnel manager can say, "We use the Strong Vocational Interest Blank to help us select programmers." This will certainly impress his manager more than saying, "We throw dice to help us select programmers."

If you don't really, really know what you are looking for or what you are measuring, throwing dice might produce comparable results. Investment advice from monkeys throwing darts has often (around 50% of the time) been shown to be just as good as the advice of the average "qualified" investment manager. If you really, really don't know what you are evaluating (or, more generally, doing), throwing dice might be an attractive low-effort, low-cost alternative. :-)
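The monkeys-vs-managers point can be illustrated with a tiny Monte Carlo sketch. The assumption (with made-up numbers) is that "expert" picks carry no real signal, so expert and monkey both draw yearly returns from the same distribution - in which case the monkey matches or beats the expert about half the time by construction:

```python
import random

random.seed(1)  # reproducible run

def random_beats_expert(trials=10_000):
    """If 'expert' stock picks carry no real signal, a dart-throwing
    monkey matches or beats the expert roughly half the time.
    The return distribution (7% mean, 15% std dev) is a made-up assumption."""
    wins = 0
    for _ in range(trials):
        expert = random.gauss(0.07, 0.15)
        monkey = random.gauss(0.07, 0.15)  # same distribution: no edge
        if monkey >= expert:
            wins += 1
    return wins / trials

print(random_beats_expert())  # close to 0.5
```

The simulation proves nothing about real markets, of course; it only shows what "no better than dice" looks like when written down.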

"Even assuming that the profiles are available [...] do they really reflect what we want? After all, these profiles are obtained by testing people already in the profession, not the people we would necessarily want to be in it if we knew what we wanted [my emphasis]. In old, established, and stable professions, it may be valid to assume that people in the profession are, by and large, the ones who should be in it - even though they might have been steered there by [...] a self-fulfilling prophecy"

Think less of "people" and "professions" and more about "educational programs" and you notice the conserving power inherent in the preoccupation with evaluating what is, and what has been. How do you develop a world-class innovative university education? By evaluating what has worked fine before/elsewhere/up to now, or by taking stock of the future and setting out in a new direction - even, or especially, if that turns out to be a different direction from merely extrapolating from the past? I presume "best standards" approaches are fine if you have set your goals at mediocrity or slightly-above-average, but in order to do better than that you really do need to think for yourself (yourselves) and look forward more often than you (anxiously) look back over your shoulder.

"Essentially all psychological tests [...] assume that the psychologists who made the tests are smarter than the people who take them. Indeed [...] people who are attracted to psychological testing as a profession [...probably...] hold themselves to be smarter than other people. Perhaps it could not be otherwise, for how would they get evidence to the contrary? [...] In a way, a personality test is an intelligence test - a matching of wits with the person who made the test."

If a monkey in the jungle stumbled upon an iPad or some such high-tech gadget, he would probably think it a very stupid, or silly, object - and how could it be otherwise? Do the people who create a "test" such as the Education Assessment Exercise, as well as those who buy in and carry it out, those who analyze the results and those who act on those results, "by necessity" think that they are smarter than the people who "get the short end of the stick" and whose only role is merely to provide them with information? (Yep! It is an unequal relationship.) Will they "by necessity" think that their conclusions about "what is to be done" are of a higher caliber than the opinions of the flesh-and-blood university teachers who, by filling out these tedious forms, provide them with truthful (?) information on which they are to act?

Instead of filling out forms and providing others with information (and putting our hope in wise decisions made by some nebulous "them" residing elsewhere), a better way to improve an education might be to create the space and time for those who actually teach to regularly meet and discuss issues that they find necessary, interesting or problematic in their day-to-day, month-to-month or year-to-year activities! To "create time" means (for example) protecting the time of the faculty at a university from incursions by others - including centrally initiated requests for this-and-that, or (some) students' extravagant expectations of getting personal answers by email within a day to any and every question they might pose (even though the answers might already have been provided at the introductory lecture, and even though many of their friends might know the answer). Perhaps a central person (a process leader or mediator) would need to do no more than put his ear to the ground, listen to the concerns of those who are enmeshed in the day-to-day operations, and then raise some of these issues as points to be discussed among colleagues?

"As we know, applicants for programming jobs are likely to be a rather clever bunch, so we can assume that a great deal of "cheating" will take place if they are given [personality] tests. But that should not worry us, for if they cheat successfully, they are probably going to have a number of the critical personality traits we desire - adaptability to sense the direction of the test, ability to tolerate the stress of being examined by someone they don't know under assumptions they cannot challenge, assertiveness to do something about it, and sense of humor enough to enjoy the challenge."

Perhaps I (and other teachers) filled out the Education Assessment Exercise not with the goal of providing "accurate" and "truthful" information, but rather with the intention of making my own courses sound as good as possible, and in doing so attempting to aggrandize myself as a teacher... (job security, salary, image and social standing)? Perhaps I did not set out to do that, but it might still be difficult for me to provide information about the courses I teach without implicitly/unconsciously promoting myself... (How do I want to come across to my superiors and to other (unknown) people reading what I produce? What should I write in order to convey these impressions?)

Perhaps I (and other teachers) did fill out the Education Assessment Exercise with the goal of providing "accurate" and "truthful" information - but that information might still turn out to be of questionable value to someone who has no knowledge of the underlying reality behind it (all those beautiful formulations lovingly crafted to reveal some facts and conditions and to sweep others under the carpet...). Distant decision makers have the map, but the relationship between the map and the underlying reality is shaky. To the person with a hammer everything will look like a nail, and to the person with a map, every obstacle will seem easily skirted from the comfort of his armchair...

My conclusion is that the value of tests and testing in general is greatly exaggerated! The implications of this statement for what students do at universities are fundamental and far-reaching. It might become the topic of a future blog post.

Tuesday 7 June 2011

The paradox of planning


How much time should be spent planning an event in relation to the time spent actually doing it? How much time should be spent planning a study or a software project in relation to the time spent actually doing that study or project? How much time should be spent evaluating a course in relation to the time spent actually teaching it?

When it comes to doing studies, most "junior researchers" (university students writing their theses) spend way too little time planning before they jump in, both feet first. That is a pity, because by thinking through all the steps of a study and the process of performing it beforehand, you can "debug" it - find and solve problems before you even start, rather than halfway in. The same goes for writing a large, complex computer program - time spent planning is undervalued, especially by junior programmers.

But after you have written that computer program, how much time should be spent documenting it? A traditional problem in software engineering is that very little time is spent on documentation, and the short answer would thus be "more". At the same time, there is a law of diminishing returns at play, both in documenting computer code and in planning a study. Spending twice as much time does not necessarily lead to twice as good results, and spending ten or a hundred times as much definitely won't. At some point, it's time to say "enough".
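The diminishing-returns argument can be made concrete with a toy model. Assuming (purely for illustration - the logarithmic shape is my own stand-in, not a measured law) that quality grows with the logarithm of effort, then going from 1 to 3 to 7 hours roughly doubles the effort at each step but adds only a constant increment of quality:

```python
import math

def quality(hours):
    """Toy model: quality grows with the logarithm of effort, so each
    doubling of (1 + hours) buys the same fixed improvement."""
    return math.log2(1 + hours)

# 1 -> 3 -> 7 hours: effort roughly doubles at each step,
# but quality only climbs by a constant +1.0 each time.
print(quality(1), quality(3), quality(7))  # 1.0 2.0 3.0
```

Whatever the true curve looks like, the qualitative point stands: somewhere along it, the next hour of documenting or planning buys less than the hour before it.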


Moving to the university setting, how much time should be spent planning and giving a course, and how much time should be spent administering and documenting that very same course? At some point, any endeavor will face a diminishing marginal utility from pouring more resources into it. My previous blog post concerned the diminishing marginal utility of trying to control and improve the quality of higher education through top-down approaches (such as performing "Education Assessment Exercises"). If I give a course this year and will give the same course next year, how much need is there really to document the course in excruciating detail? I can understand the need to do so if I were to hand off the course to someone else, but if I don't plan to, is it not enough that I do the work of thinking through the course, rather than writing lots of stuff up just for the sake of it?

My point is that those times I have written up a course evaluation, nothing has come of it. Sure, someone may have spent a few minutes looking at it, but that is of no special use to me. So why should I be forced to write things up just for the sake of it?

On the other hand, I would be passionately willing to discuss the course in question with someone who has a sympathetic ear and who might offer suggestions for improvements, but this service has never been offered to me, because someone else's time is a cost while my time is free (in the eyes of my superiors). This is another example of the difference between a top-down approach (ordering me to provide information that no-one ever reads, or at least never acts upon) and a bottom-up approach (supporting me as a teacher with those things I perceive as problems in my day-to-day activities). It also unveils the power relationships between me as a lowly and exchangeable provider of educational services, a cog in the machinery of an industrial model of education - and those who run the organization or the whole system of higher education in Sweden. In this model, students are the raw materials and teachers stand at a conveyor belt "treating" them in different ways until, in the end, we turn out "quality-assured" cookie-cutter engineers.

We never have enough money to provide hands-on support for teachers to help improve their courses, but we always have money to make sure that someone writes an email to remind me to fill out the course evaluation document and to perform an EAE assessment. Who reads the course evaluation document? How will the EAE assessment be used? Who knows? I for sure don't know! How come? Who knows?


Friday 3 June 2011

Program-integrating course

We have this course in our program where all students (250+) meet once every quarter in order to reflect upon and discuss the courses of the previous period. It is for the most part a really good course, but it runs over three years and managing the administrative backend of the course is indeed a hairy affair. Unfortunately that is my task, as I have been responsible for the course this year.

I can be very detail-oriented, so I manage, but it is quite boring and it takes a lot of time. Tracking down the students who did not perform and forcing them to do an extra task (which some of them object to) is not the most fun thing to do either. Sometimes I have to invoke detective skills to try to make sense of things that happened half a year ago or more. But someone has to do it.

I've now spent several days just summing up the results of four seminars during the past academic year, with hundreds of students, divided into 36 groups, led by 9 teachers, all of whom report to me. The quality of the documentation I get from the teachers varies, so errors do creep in, and I then have to juggle students who are upset or angry. Sometimes the anger is justified (errors do happen), but oftentimes it is not.

Fortunately I'm (almost) finished now. Phew!

Tuesday 31 May 2011

Top-down vs. bottom-up


Over the weekend, I've thought a little bit more about exactly why the exercise I wrote about in the previous blog post (Education Assessment Exercise) matters to me. It is obviously not the two hours I spent (wasted?) on the exercise itself that matters, but rather something else that bothers me. What?

I believe it really is the folly of someone, somewhere thinking that such an exercise will magically (?) improve the state of our education that bothers me. I might live under the misconception that there is goodwill involved and that this exercise is an honest (but misguided) attempt to improve what we do. A more devious alternative interpretation is that it is all about power and control, and that quality has little to do with it. Still, I will here operate under the first assumption and spell out why I think it is misguided.

I'd like to draw a parallel to the gentleman-"scientist" who, at the height of the British Empire, sits in his comfortable leather chair somewhere in the greatest city of them all, London, and reads accounts from all around the world (i.e. the British Empire) about "savages" and their affairs. Based on these reports - themselves of varying quality - he performs some magic sleight of hand. Through an act of armchair science (i.e. not getting his hands dirty by working with actual empirical material, but rather using others' interpretations of others' interpretations as his "research material" to be analyzed) he puts it all together into one grand unified theory about cultures and races. He "reasonably" draws the conclusion that savages can at best be compared to children and that "we" (white British imperialists) obviously are doing them all a great favor by ruling their countries and "taking care" of them. Later, and based on his "unified theory of savagery and governance", the Empire finds support for a variety of policies that one hundred years later look, well, "strange".

Such one-size-fits-all theories take few of the unique idiosyncrasies of specific cultures and geographies into account, and suffer from a whole lot of other intractable problems too. The lack of high-quality information, and the folly of drawing sweeping conclusions without any first-hand observations of one's own, has since been heavily discredited in scientific contexts. Scientist and science fiction author Isaac Asimov does a great job of portraying such dismal scientific processes on a grand scale in the aging, dying galactic empire of his Foundation trilogy. In an old, tired empire stretching over innumerable worlds, it is considered crude to collect actual empirical material, and refined to add yet another layer to what there already is. Armchair science in a culture no longer interested in the present or the future is in fact a defining characteristic of this vast empire in decline.

I find that some of the same mechanisms of trying to understand and rule at a distance ("top-down") are at play in a large bureaucratic organization such as KTH. I would suggest that one example of this "syndrome" is a vast overconfidence in the possibility of improving the content and quality of whole university programs and individual courses through top-down measures, and a corresponding lack of confidence in the alternative: the possibility of reaching the same kinds of improvements through bottom-up measures.

Raising quality through bottom-up measures necessitates a high degree of trust in the faculty and in the teachers who are the nuts and bolts of the teaching effort. I would go as far as to say that there is little such trust in place today. I also freely admit that such trust can sometimes be misplaced. However, if there is a perceived need to (attempt to) control every teacher and every course in infinite detail through copious and detailed instructions regarding this-and-that, it really just signals that the average teacher is not trusted by the very organization she works for. But seriously, what can realistically be done at all if a university does not trust its core personnel - the people who actually teach?

It should be obvious that the most important task of all other personnel at a university should be to support those who actually teach; those who meet students in classroom situations and who try to impart some knowledge and at times, hopefully, even some wisdom. To me - a small cog in the machinery - it oftentimes feels like it's the other way around: other groups of employees (administrators, bosses) make demands on my and other teachers' time, not least because (from their point of view) our time is always free of charge. If I, on the other hand, would like to make demands on some other people's time within this organization (where have all the secretaries gone nowadays?), the gut reaction is instead to deem it expensive and therefore unrealistic.

I personally think that one of the best and least expensive ways to improve individual courses (and consequently whole educational programs) would be to set up high standards and requirements as well as high-quality support for the individual teacher. I most often feel that neither is in place, and I would personally not appreciate high demands without also having a high degree of support (as in "every man for himself", or "sink or swim"). As a university teacher, I am of course free to develop and improve my courses as much as I would like to - or not. There is unfortunately little official support for doing so, and few consequences of not doing so.


Friday 27 May 2011

The folly of our Education Assessment Exercises (EAE)

I took part in an Education Assessment Exercise (EAE) a week ago. I can't say that it was "traumatic" or anything like that, but it did point out several things I find strange (e.g. stupid) about the individually easy, but in total overbearing, administrative duties of a university teacher.

The EAE exercise itself took less than two hours - no big deal - but it still felt like a relatively futile attempt to "capture" important things about the courses I and other teachers teach in our program. I had a large sheet (A3-sized) with 11 high-level goals for our education. These goals were an amalgam of a variety of different (worthy) goals, but the result was cumbersome, to say the least. Here are three examples of these 11 goals. Our students should:

- "demonstrate broad knowledge and understanding in scientifically based proven experience in the chosen area of technology (the main area), including expertise in math and science, significantly advanced knowledge in some parts of the area, and deepened insights into current research and development. Students should also demonstrate deepened methodological knowledge in their chosen technical area."

- "demonstrate an ability to holistically, critically, independently and creatively identify, formulate and manage complex problems and demonstrate the skills required to participate in research and development, or to perform independent work in other advanced contexts and thus contribute to the development of new knowledge."

- "demonstrate an understanding of capabilities and limitations in science and technology, its role in society and our responsibility concerning their use, including social and economic aspects as well as environmental and work safety aspects."

Do you get it? These goals are very "fluffy" and imprecise, or alternatively all-encompassing. Furthermore, they overlap (figuring out how much they overlap is in itself an advanced task). A third of the text/terms were marked in bold, signifying that they were extra important - and there were a lot of "extra important" terms in these texts... There was a not insignificant number of spelling errors in these texts, implying that they had not been prepared with a lot of care (much like this text, with the difference being that I don't force my colleagues to read it :-).
For each of the courses that I teach, I then had to specify whether any of these goals were covered by 1) the (stated) goals of the course in question, and if so, 2) what "learning activities" they corresponded to in the course, and 3) how these goals and activities played a part in the examination (and grading) of that course. Phew!

Filling out this form felt to me like a cross between a jigsaw puzzle and a big waste of time. My tiny act of rebellion was to ask/comment out loud that it didn't feel like this exercise, and the relatively carefully crafted snippets of text that I produced, really captured what was important about the courses in question. The answer I got was not very enlightening and had something to do with the fact that "we all had to do it anyway".

Now, I can understand that the one person who is responsible for our education in Media Technology (Björn) might have some use for these artifacts (the filled-in sheets), but I'm sure he could have gained much better information by performing short "interviews" with me and the other teachers. I'm also not at all convinced that the effort of performing this exercise (including Björn's effort to interpret and act on the information that teachers provide) stands in parity with the potentially positive results of the exercise.

My constructive suggestion would have been for Björn to himself have done the intellectually taxing/deadening work of trying to make sense of these wonderful-sounding but overlapping, slightly overbearing and potentially vacuous goals, and then to have used his interpretation as a starting point for conversations with the teachers. Instead, all teachers individually and redundantly had to do the work of trying to understand what all these (fluffy, overlapping) goals meant (and implied), what was wanted of us by this exercise (and by the person(s) who will interpret our scribblings), and then do our best to fudge up answers that sounded maximally impressive and convincing. If different teachers interpret the goals in different ways, we will provide information and answers that in subtle ways answer not the same, but different questions. The impression I wanted to convey when I filled out the EAE sheets was that my courses turn every student into a Leonardo da Vinci (or at least try to...).

I want to make clear that I have only the highest respect for Björn and what he does. Perhaps he would say that he doesn't have the time to perform individual interviews with teachers about their courses. That answer would be valid, but it implies that this task might not be so important after all. That's fine, but if it's not worth doing well, then it might not be worth doing at all. Also, although Björn is the person best suited for this task, it does not necessarily have to be him personally who does it. The task could be outsourced to someone else (although I would personally decline).

This exercise was as much a test of my personal ability to interpret and act on complex texts and to make (a perhaps sometimes rather bleak) reality sound like heaven on earth, as it was a way to convey some sort of "objective" or useful information to someone higher up in the chain of command. I do agree that this exercise can reveal that the goals of a course are no longer in line with the activities in the course, but finding this out could be done in many other ways, almost all of which are much simpler and easier than performing this exercise...

What I especially object to in this case is the fact that the quality of the information I provide is so low that I have little confidence that it is of use to anyone who will look at it, and that the results of any actions or recommendations that come out of this exercise will be so out of touch with the realities of teaching that they might hinder as much as they might help. It is basically a lottery. One unfortunate possible outcome could be that teachers will be ordered to provide more information about the courses they give in order to... well, something. For example, to keep other people (newly hired exclusively for this purpose?) somewhere in the organization "in the loop"? Would this help to improve the quality of the education? Of course not.

This blog post only scratches the surface of the errors in thinking that this exercise is a symptom of. I hope to write a follow-up post about the folly of using precious resources (including my time) to try to improve education in this "top-down" manner.
.

fredag 20 maj 2011

This year's media technology bachelor student theses

.
Our third-year students presented their bachelor's theses this week. We strongly encourage our students to write these in pairs. I have been the advisor ("demon producer"?) of five students (three theses), and I have also been the examiner of 11 theses (where I read, provide feedback on, judge and grade them).


The theses that I have "demon produced" during the spring (all written in Swedish) were called:

- "Carbon dioxide currency with individual carbon dioxide rations" [Koldioxidvaluta - med individuella koldioxidransoner].
This thesis is based on my suggested thesis topic "carbon dioxide currency" (in Swedish).

- "Starcraft: A spectator sport for a wider audience?" [Starcraft: En åskådarsport för bred publik?]
Could the popular computer game Starcraft 2 become a "spectator sport" in Sweden?

- "The economy around professional e-sport players, with a focus on Counter-Strike"
Where does the money come from to support professional computer gamers? Who sponsors these professionals and why? What are the (economic) conditions of professional gamers?


The theses that I examined were called:

- "Course evaluation system for students based on user generated content"
- "Live concerts, digitally through time and space"

Quite a few (more than half!) were related to social media (blogs, Facebook, Twitter, user-generated content).

The students are supposed to provide English-language titles, but since they haven't handed in their final versions, some haven't done so yet, and I might thus retroactively alter some of the titles above.

I will eventually link the list above to the final texts. We publish all student theses on the web nowadays but I don't know if that will happen before the summer (June) or after.

Almost all theses are written in Swedish. The quality of the work (and everything that goes into it - the planning, the research question, the methods chosen, the investigation itself, the analysis, and the flow of the actual text in the report) of course varies widely...
.

tisdag 10 maj 2011

What do our ex-students work with?

.
For the longest time, I have floated the idea of an "individual course" to map where all our graduated media technology students work and what they do. I started plugging the idea more than half a year ago and I know for sure that I have reached all our (current) students (around 250 or so). Despite this, no-one had expressed any interest in doing this - until now. Within a short time, no fewer than three students have gotten in touch with me and expressed interest in taking on this task.

My original idea came from seeing the wonderful information about work/employer as well as information about personal networks that is accessible in LinkedIn. I never initiate, but almost always accept LinkedIn invitations from students of mine. I thus have an extensive network of contacts and I can also see their contacts and LinkedIn networks. I'm sure it is possible to find most of our alumni through these (LinkedIn) networks and then sift through the information about current (and perhaps previous) employers and what they work with. Are they consultants, entrepreneurs or web programmers? Do they work with video conferencing systems, in the game industry or with audio books?
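The sifting step itself is mundane enough to sketch in a few lines of code. Below is a minimal, purely illustrative example, assuming the contact data had been exported to a CSV file (the column names and sample entries here are invented for the illustration, not taken from any real export):

```python
import csv
import io
from collections import Counter

# Hypothetical sample of an exported contacts file; a real export
# would have one row per alumnus with their current employer.
SAMPLE_CSV = """Name,Company,Position
A. Andersson,Spotify,Web Developer
B. Berg,Ericsson,Consultant
C. Carlsson,Spotify,Game Designer
"""

def tally_employers(csv_text):
    """Count how many alumni work at each company."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return Counter(row["Company"] for row in reader)

# Print employers from most to least common.
for company, count in tally_employers(SAMPLE_CSV).most_common():
    print(company, count)
```

The same tallying could of course be done per job title ("Position") to answer the consultant/entrepreneur/programmer question, but the hard part of the student's task would be collecting and cleaning the data, not counting it.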

I have thus far refrained from elaborating on the exact character of the mapping task that a student should do, as I think it is more appropriate to do this together with the student who takes it upon him- or herself to do the task. The thought was to craft the task according to my and the student's own interests (where my interests represent the interests of the whole department). We know way too little about what our students do after graduation, but we would all surely like to know more. An individual course comes in different flavors, so it would be possible to do this task for credits representing 4, 5 or 6 weeks of full-time studies (160 - 240 hours of "course work").

As it turned out, two persons got in touch last week and we had a meeting together at the beginning of this week. It now seems the plan is for them to expand the size and scope of the task and shape it into a bachelor's thesis project that they will do together. It is as of this moment unclear if they can do their thesis (with me as their advisor) now/during the autumn, or if they have to wait and do it next spring together with all the other third-year students. I myself am divided on this issue. On the one hand, it would be a drag to wait until next May to have the results of this study. On the other hand, it is much easier and more convenient for them and for me if they do it within the structure of the bachelor's thesis "course" - and that course is only given during the spring term. Our (mine and their) preferences might anyway be moot, as it is not clear that they will be allowed to do their bachelor's thesis outside of the course at all (more work for everyone involved, less structural support for them).

Anyway, it is all very exciting that some students have finally risen to the challenge. They are equally or even more interested than the faculty in finding out what our alumni are doing nowadays: just as we teachers would like to know more about this, so would the students themselves (both these individuals and the larger student body) like to know where they will work in the future! So beyond personal interest, we all feel that this project would in part be a "civic duty" that many others would be interested in and benefit from.

As it so happens, right after my meeting with the students, yet another student got in touch with me to ask about doing this as an individual course. I might involve her too; she might be able to do some work that would be of use to the other two students (especially if they do not commence their bachelor's thesis until next spring). I will involve her in our soon-to-be four-way talks and we'll see how it turns out...
.

söndag 1 maj 2011

The reflective engineer

.
I gave a lunch talk in a student-led "project" called The Reflective Engineer this past week. It is one of the projects organized by the KTH student organization Sustainable Engineering Everywhere (SEEK), and there was a surprisingly large audience - perhaps 60 or 70 students or so. I don't know if it was my talk or the free lunch that got students there...

I built on the talk I gave half a year earlier at the House of Science and connected peak oil to the future of the Internet. I unfortunately only had 45 minutes, which left only a few minutes at the end of the talk for questions. We had time for two questions, and both were good.

I stayed and answered some questions afterwards, and in fact had an hour-long conversation with an Iranian guest student who had earlier worked as an oil trader.

I realized afterwards (a few days later) that my talk was probably the first occasion for many students to hear about peak oil. Talks such as these give me an excellent platform to talk about issues that are important to me, so it is a pity to only have a few minutes for questions. What I realized is that I should have offered to sit down in the café outside the lecture hall for half an hour to talk and answer questions from those students who felt the need to probe further the questions that were raised. That would also have been very valuable for me; I could have gotten some feedback and a better feeling for how my talk was understood and processed by the students. Alas, this did not happen this time around, but I have decided to do so every time the opportunity presents itself from now on!
.