My previous blog post covered the pre-conference NordiCHI workshop we organised. This blog post covers the rest of the conference - the 8th Nordic Conference on Human-Computer Interaction. Finland at the end of October was windy and cold, but not that much colder than Stockholm. Renting a huge 200 m2 luxury apartment right next door to the conference venue and sharing it with colleagues was excellent compared to renting a small (and more expensive) hotel room. Taking the boat to Helsinki was also pretty awesome - despite having to fly back to Stockholm to receive a guest at KTH the next morning. The organisation of the conference was excellent!
There was a record-breaking number of participants at NordiCHI - more than 500 attendees, of which (only) 45 were Swedes. I was quite surprised to learn that, with the exception of the Finns, the two countries with the most participants were the UK and Germany. Only then (in 4th place) came Sweden, followed by Denmark and Norway. Perhaps NordiCHI will be held in the UK or in Germany one of these days? Beyond practical aspects, what about the actual content of the conference?
I was unfortunately not equally impressed with the content of the conference. The fact that the paper Barath Raghavan and I wrote was the only one about sustainability was a pity, but I can live with that. If I didn't know it before, I have by now figured out that I am only interested in some of what falls within "mainstream HCI" research. I guess I didn't notice that as much at the much larger CHI conference half a year ago - since there were no less than 15 parallel tracks to choose between. With "only" three parallel tracks, it was easier to notice that quite a lot of the content did not raise my pulse significantly. It was of course nowhere near as bad as when I went to a computer games conference at the end of the summer and realised I was hardly interested in anything at all, but still, I guess my research interests nowadays primarily focus on sustainability and only secondarily on HCI. Some of the presented papers were interesting, but quite a few of them weren't (to me).
Don Norman was the opening keynote speaker. I heard him talk when he passed by UC Irvine this past spring, but that talk for the most part concerned the new, heavily updated and reworked version of his classic book "The design of everyday things". Here, Norman talked about the new playing field - a comparison of sorts between what Human-Computer Interaction was when he wrote "The design of everyday things" in 1988 and what it is today. The very term "Human-Computer Interaction" was coined at a time when the typical situation was one human interacting with one computer. Computing power is now all around us, and HCI professionals are asked to solve problems of a very different character: redesign the interface to Obamacare, or redesign the public administration of Singapore (true story). Norman also talked about what humans are good at, what computers are good at, and what happens when things go wrong. His was a cautionary tale. Automation is a great thing - until it isn't. When something goes wrong, the human in the loop is rusty and less prepared to take care of the "extraordinary" problem in question. This is basically a repetition of what Bainbridge wrote in her classic article on the "Ironies of automation" (pdf) 30 years ago, but it is a cautionary tale and a lesson worth repeating time and time again. Norman went into some detail about the "challenges" of self-driving cars and massively networked, complex technologies. Technocratic engineers automate what can easily be automated and leave the rest - the really difficult stuff - to human operators. Travelling at 100 km/h, you only have 1 or 2 seconds to react before you are out of time, but when things (invariably) go wrong, we still always blame "the human factor".
I personally made the connection to McLuhan's thoughts about technologies as both extensions of our bodies - our voices can reach farther through radio, our eyes can see farther through TV and our bodies can lift heavier loads when we sit inside an excavator - and as amputations when our muscles and our skills atrophy:
"Marshall McLuhan taught us [that] "every extension is [also] an amputation." By this he meant that when we rely on technical systems to ameliorate the burdens of everyday life, we invariably allow our organic faculties to atrophy to a corresponding degree. [...] Elevators allow us to live and work hundreds of feet into the air, but we can no longer climb even a few flights without becoming winded. Cars extend the radius of our travels by many times, but it becomes automatic to hop into one if we're planning to travel any further than the corner store" (from Greenfield, 2010, "Everyware: The dawning age of ubiquitous computing".)
My colleague Henrik Åhman, who had never been to a CHI conference before, was quite unimpressed by (disappointed in) the conference. He thought there were too many "fun systems" presented, but not enough thought and social commentary. Where are the reflections and where are the theories? His question was basically "why exactly are these systems developed?", and neither I nor anyone else had a good answer for him. I guess I could have asked that very same question myself had I not become habituated to the field and to the kinds of results that "fit" and are encouraged at HCI conferences. I do however believe that my own paper was one of those (few?) papers that did try to ask deeper questions. The paper is called "Rethinking sustainability in computing: From buzzword to non-negotiable limits" and I have written about it here on the blog before. It was a challenge to present the paper in the 15 minutes I was allotted. I was out of time surprisingly quickly and had to skip a few things on the fly. On the whole, I was still quite happy with my presentation, but I guess it's for others to judge since I was all too busy talking to have time to listen to it. Other presenters at the conference included my wife, Teresa Cerratto-Pargman ("Understanding audience participation in interactive theatre performances") and my colleague Petra Björndal ("On the handling of impedance factors for establishing apprenticeship relations during field studies in industry domains").
An example of a system that I thought was "overdeveloped" (?) or "underthought" (?) - or both - had to do with "the remembering of everyday life". Do note that I have only listened to the presentation and not (yet) read the paper. The presenter compared lifelogging (taking a picture of what's in front of you every 30 seconds and archiving it) with their system, which more carefully helps users select and save mementos. I presume their point was that lifelogging was not as good because of its lack of intention and selectivity. But the opposite of lifelogging is not selectively saving mementos, but rather not using any technology at all and relying only on your own unaided memory to "record" and "save" memories of the important events you take part in (remember - extensions and amputations invariably go together!).
I had a big problem with the presentation already at this point (long before the actual system was presented - there is almost always a system presented in HCI papers). This research takes for granted that it is desirable to save lots of (carefully selected) stuff (photos, souvenirs and other objects) - but perhaps not too much (i.e. not a photo every 30 seconds). My question is: "why?". What if we are already drowning in memories and forgetting is just as important as remembering? Shouldn't we then also design systems that will help people forget (selectively?)? Why is nobody ever doing that? We might for starters need systems that will help us forget/remove the "clutter" of having 100,000 photos on our computers (that's one photo taken every waking hour over a period of 20 years)? Or rather, what is the tradeoff between remembering and forgetting, and why is the forgetting part of this equation always ignored or forgotten (sic!)? What is it with this imperative to remember? It is hardly ever discussed in depth before some brave HCI researchers attempt to design (yet another) system that will help us remember more (and more). Is the answer simply "because we can", or are there any deeper thoughts behind this and other systems? What is ailing us as individuals and as modern societies that would make for worthy causes for budding HCI researchers to tackle? Is it really that we don't remember enough of what happened 5 minutes, 5 days or 5 years ago? If we don't remember what just happened, could that not be an effect of us being too busy collecting new mementos in the present to remember even the recent past? But then who am I to raise these issues in this, my 307th blog post since I started blogging here 50 months ago...?
There are so many unanswered questions here, but it seems we as a community do not want to think about them too deeply and instead choose to run straight ahead, designing systems that attempt to support (some relatively-easy-to-automate aspect of) our lives "because we can". Is this not exactly what Norman warned us about when he pointed out that we automate what can easily be automated and then leave the remaining, difficult stuff to the humans? A more sensible starting point would instead seem to be to first ask what humans need help with and then design and shape technology to satisfy those needs. I don't really feel that that is what is being done, and I guess this rant also supports Henrik's opinions (above), but I'm still not finished.
What if memories and mementos can become a burden? I am here reminded of Alexander Luria's book "The mind of a mnemonist" (1987). It's a psychological case study of a man who could not forget and who for the most part supported himself as a memory artist in Soviet Russia. He had a photographic/encyclopaedic memory and could remember random series of playing cards even years later. When he tried his hand at other professions, he had problems distinguishing between high and low, between important and trivial memories (knowledge). If I remember correctly, he may also have had problems in his personal life (marriage, friendship). Is this what HCI researchers implicitly want to turn us all into - modern "rain men", the closest thing you can come to being a computer? What if we entertain the notion that computer-supported memory systems can/could/will/do make us more rather than less unhappy - helping us cling to conflicts and past injustices (both personal and historical), long-lost love, departed relatives and many other for the most part "unproductive" ways of dwelling on the past? What if the problem already today is that we remember too much rather than too little? What if, by spending more time with our memories, we spend less time living in the present and less time thinking about and planning for the future? What if, by spending more time with technical artefacts (that help us remember the past), we also spend less time creating significant memories together with the people we hold dear in the present? I don't know, but perhaps HCI researchers should be required to read more fiction - for example this year's winner of the Nobel Prize in Literature, Patrick Modiano, whose most important themes (that he revisits again and again in all his books) are memory and identity. From the Wikipedia article about him and his authorship:
"All of Modiano's works are written from a place of "mania." In Rue des Boutiques Obscures (Missing Person), the protagonist suffers from amnesia and travels from Polynesia to Rome in an attempt to reconnect with his past. The novel addresses the never-ending search for identity in a world where "the sand holds the traces of our footsteps but a few moments." In Du Plus Loin de l'Oubli (Out of the Dark), the narrator recalls his shadowy love affair in the 1960s with an enigmatic woman. Fifteen years after their breakup, they meet again, but she has changed her name and denies their past. What is real and what is not remain to be seen in the dreamlike novel that typifies Modiano's obsessions and elegiac prose. The theme of memory is most clearly at play in Dora Bruder ... In Modiano's 26th book L'Horizon (2011), the narrator, Jean Bosmans, a fragile man pursued by his mother's ghost, dwells on his youth and the people he has lost. Among them is the enigmatic Margaret Le Coz, a young woman he met and fell in love with in the 1960s. The two loners spent several weeks wandering the winding streets of a now long-forgotten Paris, fleeing a phantom menace. One day, however, without notice, Margaret boarded a train and vanished into the void—but not from Jean's memory."
I wonder how lifelogging and other technologies for enhancing remembering through technical means would have helped - or thwarted - the characters in Modiano's novels. I have always assumed that as we get older and retire, the future shrinks and we tend to dwell more upon the past and think less about the future. When we are young, we have most of our lives in front of us, so it would seem natural that we think more about the future than about the past. So are we then inventing technologies that inadvertently help us age in advance by helping (encouraging?) us to think more often about the past? Perhaps "memento technologies" help us cling to the past when we - at all ages - should be thinking and caring more about the future (our own and our grandchildren's)? Perhaps memento technologies help us become more self-centered? What if we more often should heed Baumer and Silberman's suggestion and think about "When the implication is not to design (technology)"? Referring back to Henrik, I guess my problem is that there really isn't a theory of memory anywhere in sight - except the happy-go-lucky attitude that "remembering is good and remembering more is better". I should point out that at some point in the text above, I moved on from this specific paper (that I haven't read) to critiquing a more general class of HCI papers.
Ok, I am the first to admit that I really don't have any idea what I'm talking about here, since this is neither my area of expertise nor my research, but I sometimes get really tired of "the new new thing" and I very much like to exercise some "contrarian" thinking. Perhaps I really should read the paper in question - it would definitely be the decent thing to do after having written this much based on the presentation alone. Still, I suspect there are more ideas worth pondering in the few paragraphs above than in a whole bunch of HCI conference papers about systems supporting memory - since these papers will most often assume that all the really difficult problems have already been solved and then spend an inordinate amount of time devising ingenious ways to solve the relatively simple problems remaining:
"the weakness of all Utopias is this, that they take the greatest difficulty of man and assume it to be overcome, and then give an elaborate account of the overcoming of the smaller ones. They first assume that no man will want more than his share, and then are very ingenious in explaining whether his share will be delivered by motor-car or balloon." G.K. Chesterton (1905), "Heretics".