I recently (September 29) organised a (Design Fiction) workshop, "The Futures of Computing and Wisdom" at the NordiCHI 2018 conference in Oslo together with Elina Eriksson (KTH), Rob Comber (KTH), Ben Kirman (York, UK) and Oliver Bates (Lancaster, UK). Here's a summary (from the workshop Call):
There has been an increasing interest in discussing the consequences of the technologies we invent and study in HCI research, including non-technical dimensions (societal, ethical, normative) (Mankoff et al. 2013, Pargman et al. 2017). This is also apparent in the surge of interest in Design Fiction during the last 10 years (Bleecker 2009, Tanenbaum et al. 2013, Dunne and Raby 2013). Design Fictions have traditionally emphasised near-future developments, implications and consequences, but what about developments that lie one or several decades into the future? If we want to think about and discuss how computing will affect and change society decades from now, the focus cannot be on the technology itself but rather on other types of questions.
This workshop will invite participants to a dialogue on the futures of computing and wisdom. Wisdom relates to the dominant paradigms of knowledge, and elucidates what might be considered responsible and wise, and why. Through collaborative imagining, we will draw attention to the consequences of the technologies we invent and study in HCI, including non-technical dimensions (societal, ethical, normative). Deploying methods from Design Fiction we will project and reflect on the future of wise computing for 2068. Extending from the near-future projects of Design Fiction, we will deploy fictional abstracts to examine how computing, through future and imagined technologies and research on HCI, AI, IoT, and related studies on Big Data and Smart Technologies, will create, question, and reinforce ways of knowing, doing and living.
What workshop participants did not know until they showed up is that the workshop had a back story. "Futures", the journal of policy, planning and futures studies, celebrates its 50th anniversary with no less than three special issues. The theme of one of these special issues is "Wise Futures" and the editors invite submissions in the form of "dialogues on the futures of wisdom, i.e. what might be considered responsible and wise in 2068, and why". They more specifically ask for contributions in the form of "structured reports on conversations", so we planned and organised a "conversation" in the form of a workshop! We hope to be able to submit something to the special issue (the deadline is December 31), but we organisers have to discuss how, since there is some uncertainty: "structured reports on conversations" is a new genre of text that none of us have worked with or indeed even seen before.
To participate in the workshop, prospective participants had to submit a fictional abstract, i.e. an abstract of a scientific paper that will be written 50 years from now. Fictional abstracts are also a new genre, but we published some helpful guidelines for how to create compelling fictional abstracts on the workshop webpage. Since the submissions were oftentimes the participants' first attempt at a fictional abstract, we reviewed and gave feedback on almost all submissions and encouraged participants to rewrite their abstracts. The two exceptions were the abstracts by Sus Lyckvi (Chalmers, Sweden) and Britta Schulte (UCL, UK), which were excellent already as submitted. I publish both these abstracts below (with permission from the authors).
In the end there were nine contributions (three more had, for various reasons, unfortunately been withdrawn before the workshop) and ten persons showed up to the workshop (including workshop organisers Daniel, Elina and Ben). The workshop itself was a success - time flew, and the 90-minute sessions felt like they came to an end in the blink of an eye. I have to say that it was the best workshop I have ever organised and possibly the best I have attended too. We took copious notes and also recorded parts of the workshop, but have not yet started to look at the material we collected.
The topic was tough; the year 2068 is far into the future, "wisdom" is elusive and the connection to computers/human-computer interaction is not necessarily obvious. It still felt like we managed to make headway, and we had some great discussions along the way.
As to the nine contributions, most fit the category "Beware!". That might be in the nature of writing an abstract for a scientific paper; you first have to construct a problem and then go about solving it. Only a handful of papers were in the "Rejoice!" category. Mine was one, but I was sorry to learn that my abstract was a bit too convoluted - I had been overly "clever" when I wrote it and some of the finer nuances were hard to understand. It might be that this is the case for every fictional abstract. I have the distinct feeling that abstract authors could talk endlessly about their abstract and the work that went into creating it, while the reader would need to read the abstract more than once. You do have the chance though; below are three of the nine abstracts: Sus Lyckvi's "Be All In or Get All Out: Exploring Options for CAI-Workers and CAI-Technology", Britta Schulte's "DEO ex Machina: a new Framework for Virtual Agents in Automated Elderly Care Provision" and my "Dark Patches Creator Personas".
Be All In or Get All Out: Exploring Options for CAI-Workers and CAI-Technology
Sus Lyckvi (Chalmers University of Technology, Sweden)

Collaborative AIs (CAIs) provide the combination of human creativity, empathy and intuition with extensive computational power and information access. Since the late 2020s CAI-technology has advanced many research fields [2036-1, 2036-2, 2038, 2042], but it has also been misused, most notably during the First Panic. But – whereas there is a vivid discussion on the consequences of CAI-technology, little is said about the situation of CAI-workers, despite the fact that as many as 23.2 % of them are diagnosed with a personality disorder such as schizophrenia, bipolarity or depression.
In this study we conducted in-depth interviews with 152 CAI-workers, using the insights from these in 16 tech-trials with 48 of the interviewees. Our findings show that CAI-workers are effectively excluded from society not only physically – living in closed compounds due to corporate data protection policies – but also due to the public’s attitude towards them: anger over lost jobs, envy from rejects, and the very common fear that CAIs are the last step towards fully sentient AIs. Further, there are issues of self-image, being superhuman whilst working vs significantly less able off-duty. In effect, CAI-workers are at the same time their employer’s most valuable asset and its slaves, contained and deprived of normal cognitive abilities. Accordingly, the tech-trials indicated that a prolonged CAI-state was highly favorable.
Consequently, we argue that it is time to discuss the future of CAI-technology – should it be abandoned entirely or taken further by allowing perpetual CAI-state, in effect nurturing a new type of humans?
Timeline

2036-1 Stavros Gkouskos, “I Saw Your Grand-grand-son Graduate”: Using CAI Gossip Algorithms to Increase the Mental Well-being of Elderly Patients. Proceedings of the 2036 CHI Conference on Human Factors in Computing Systems (CHI ’36), ACM Press
2036-2 Nicholas Wang, Solving Traffic-Flow Issues for Shared Autonomous Transportation. PhD thesis for the degree of Doctor of Technology, Chalmers University of Technology, 2036
2038 Barake Kansas Henry & Ireli Lyckvi, Two CAIs vs. 500 Million Sick: How We Found Patient Zero. Morgan Kaufmann Bonniers, 2038
2042 Eira Lundgren & Conor McCloud, Ensuring the Democratic Process in the Scot-Scandi Election Using CAI Technology on Citizen Input. International Journal of Interaction Design, Vol 20, Issue 2, March 2042, Springer.
2050 Eira Lundgren & Ireli Lyckvi, The Panic in 2049 – How Thwarted Gossip Algorithms Broke the West US. Random O’Reilly, 2050
2059 Charlotte Heath, Amping up Information Retrieval and System Control with a New Generation of CAIs. IEEE Transactions on CAIs and Learning Systems, Vol 11, Issue 12, December 2059
2064 Rosie Picard & Charles Francis Xavier, We Are Afraid We Can’t Do That – On Limiting Neural Connections Between CAI-Humans And Their Computer Counterpart. Science, Volume 545, Issue 8705, August 3, 2064, AAAS
2065 Elora Björk & Jari Holopainen, “Lesser Than I Used To Be”: On the Mental Health of CAI-workers. Proceedings of the 21st International Conference on Exo-Applications and Technology 2065 (EAT ’65), Springer
DEO ex Machina: a new Framework for Virtual Agents in Automated Elderly Care Provision
Britta F. Schulte, University College London (UK)

Recent years have seen an increase in interaction between virtual agents and humans (VHI). While adoption has been successful in many areas such as production and education, other areas, and specifically elderly care, show a lack of engagement. Age seems to be a defining factor, as users are not used to the technology and do not benefit from its full potential. Recent updates of VA technology specifically for the sector, aesthetic adaptations or new interfaces do not seem to have made a significant change in the area.
In this paper we present an analysis of interaction logs gathered in a care home equipped throughout with virtual agents (VAs). Contrary to common beliefs, the interaction does not break down on the VA's side but on the human side, as people reject, misinterpret or ignore the well-intentioned suggestions of the VA. Following these insights, we present a new framework to support interactions: DEO. We propose three steps: DISPENSE and log how the human responds, EDUCATE the human on the insights he is lacking to make the necessary changes, and OVERWRITE his decisions, should he repeatedly decide not to follow them. We give detailed instructions on how to best implement each step based on our results. We argue that these steps will lead to increased adherence to the suggestions of VAs even by the elderly population, thereby making the technology accessible to a wider audience.
Dark Patches Creator Personas

Daniel Sapiens Pargman, KTH Royal Institute of Technology (SWE) and Wise Person, Vienna Institute for the Betterment of Humanity (East Germania)
Dark patches have become an increasingly large problem on the Internet as of late. Their noxious effects are well known; they create pockets and corridors for illegal high-frequency communication and transactions and widen the market for dark hardware. While not in direct conflict with the 2036 global Computing Backwards Compatibility Act, their existence undermines social equity and directly clashes with the UN Global Development Goal #17, “An affordable Internet for all”.
While much technical research has tried to find algorithmic solutions to the problem of dark patches, little is known of the drivers behind their creation. We here present the results of a large-scale study of dark patch DIY hackers and programmers-for-hire in three European countries. Besides the results of the study itself, we also present five fictive dark patch creator personas (”psychological profiles”).
Since we nowadays take the equitable sharing of limited resources such as the Internet for granted, we have to be all the more vigilant when various kinds of deviants and perverts try to appropriate more than their fair share of The Commons. In that vein, we end the paper with suggestions for future work that will help crime and counter-terrorism agencies in their work of understanding, identifying and apprehending dark patch creators. This work should be seen as a complement to more technically oriented measures for identifying and neutralizing dark patch code.
Author Keywords: Dark patches; human-computer interaction; personas; computer security; counterterrorism.